confluentinc / kafka-images

Confluent Docker images for Apache Kafka
Apache License 2.0

Translating properties to Docker environment variables in Kafka #156

Open utkarshsaraf19 opened 2 years ago

utkarshsaraf19 commented 2 years ago

Hi All,

I am trying to set the variables below in server.properties:

listener.name.sasl_plaintext.plain.sasl.server.callback.handler.class
listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config

As I understand it, their translation to Docker environment variables is as follows:

KAFKA_LISTENER_NAME_SASL_PLAINTEXT_PLAIN_SASL_SERVER_CALLBACK_HANDLER_CLASS
KAFKA_LISTENER_NAME_SASL_SSL_SCRAM-SHA-256_SASL_JAAS_CONFIG

They are now working for me in confluent docker image of version 6.1.0. Kindly help.

andrewegel commented 2 years ago

They are now working for me in confluent docker image of version 6.1.0

I think you mean not working - Otherwise why are you opening an issue? 🤣

Have a read here: https://docs.confluent.io/platform/current/installation/docker/config-reference.html

Convert to upper-case.
Separate each word with _.
Replace a period (.) with a single underscore (_).
Replace a dash (-) with double underscores (__).
Replace an underscore (_) with triple underscores (___).

And you can see the logic that does this translation here: https://github.com/confluentinc/confluent-docker-utils/blob/master/confluent/docker_utils/dub.py#L50-L98
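
For intuition, here is a rough Python sketch of those documented rules, going from a server.properties key to an environment variable name. This is illustrative only, not the actual dub.py code:

```python
# Illustrative sketch of the documented rules quoted above; not the real dub.py logic.
def prop_to_env(prop: str, prefix: str = "KAFKA_") -> str:
    # Order matters: expand existing underscores first so the characters
    # introduced by the later replacements are not expanded again.
    name = prop.replace("_", "___")  # underscore -> triple underscore (per the docs)
    name = name.replace("-", "__")   # dash -> double underscore (per the docs)
    name = name.replace(".", "_")    # period -> single underscore
    return prefix + name.upper()

print(prop_to_env("listener.name.sasl_plaintext.plain.sasl.server.callback.handler.class"))
# KAFKA_LISTENER_NAME_SASL___PLAINTEXT_PLAIN_SASL_SERVER_CALLBACK_HANDLER_CLASS
```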

TL;DR:

I believe this is what you need to define:

KAFKA_LISTENER_NAME_SASL___PLAINTEXT_PLAIN_SASL_SERVER_CALLBACK_HANDLER_CLASS=...
KAFKA_LISTENER_NAME_SASL___SSL_SCRAM__SHA__256_SASL_JAAS_CONFIG=...

Please close this issue if this helps you.

utkarshsaraf19 commented 2 years ago

It is still the same. The Kafka broker properties I set up in Docker are as follows:

      KAFKA_LISTENERS: SASL_PLAINTEXT://kafka:9092
      KAFKA_ADVERTISED_LISTENERS: SASL_PLAINTEXT://kafka:9092
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      ZOOKEEPER_SASL_ENABLED: "true"
      KAFKA_OPTS: "-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf -Dxyzconfig.file.path=/etc/kafka/xyzconfig.yaml"
      KAFKA_SECURITY_INTER_BROKER_PROTOCOL: SASL_PLAINTEXT
      KAFKA_SASL_ENABLED_MECHANISMS: PLAIN
      KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: PLAIN
      KAFKA_LISTENER_NAME_SASL___PLAINTEXT_PLAIN_SASL_SERVER_CALLBACK_HANDLER_CLASS: ext.security.authentication.SimpleXYZAuthentication
      KAFKA_AUTHORIZER_CLASS_NAME: ext.security.authorization.SimpleXYZAuthorizer
      KAFKA_SUPER_USERS: User:admin
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1

The ZooKeeper properties are set as follows:

      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      KAFKA_OPTS: "-Djava.security.auth.login.config=/etc/zookeeper/zookeeper_server_jaas.conf -Dzookeeper.authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider -Dzookeeper.requireClientAuthScheme=sasl"
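
For reference, the JAAS files that the KAFKA_OPTS entries above point at would typically look something like the sketch below (SASL/PLAIN on the broker listener, DIGEST-MD5 towards ZooKeeper). The actual files are not shown in this thread, and all user names and passwords here are placeholders:

```
// kafka_server_jaas.conf (hypothetical example)
KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin-secret"
    user_admin="admin-secret";
};
Client {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="kafka"
    password="kafka-secret";
};

// zookeeper_server_jaas.conf (hypothetical example)
Server {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    user_kafka="kafka-secret";
};
```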

The logs show no sign of the authentication taking effect in this Docker setup:

    zookeeper.ssl.protocol = TLSv1.2

    zookeeper.ssl.truststore.location = null

    zookeeper.ssl.truststore.password = null

    zookeeper.ssl.truststore.type = null

    zookeeper.sync.time.ms = 2000

 (kafka.server.KafkaConfig)

[2022-05-02 13:23:04,439] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)

[2022-05-02 13:23:04,440] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)

[2022-05-02 13:23:04,442] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)

[2022-05-02 13:23:04,443] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)

[2022-05-02 13:23:04,479] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager)

[2022-05-02 13:23:04,483] INFO Attempting recovery for all logs in /var/lib/kafka/data since no clean shutdown file was found (kafka.log.LogManager)

[2022-05-02 13:23:04,489] INFO Loaded 0 logs in 10ms. (kafka.log.LogManager)

[2022-05-02 13:23:04,490] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)

[2022-05-02 13:23:04,493] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)

[2022-05-02 13:23:04,506] INFO Starting the log cleaner (kafka.log.LogCleaner)

[2022-05-02 13:23:04,556] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner)

[2022-05-02 13:23:04,970] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)

[2022-05-02 13:23:04,974] INFO Awaiting socket connections on kafka:9092. (kafka.network.Acceptor)

[2022-05-02 13:23:05,005] INFO Successfully logged in. (org.apache.kafka.common.security.authenticator.AbstractLogin)

[2022-05-02 13:23:05,028] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1001] Created data-plane acceptor and processors for endpoint : ListenerName(SASL_PLAINTEXT) (kafka.network.SocketServer)

[2022-05-02 13:23:05,063] INFO [broker-1001-to-controller-send-thread]: Starting (kafka.server.BrokerToControllerRequestThread)

[2022-05-02 13:23:05,085] INFO [ExpirationReaper-1001-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)

[2022-05-02 13:23:05,086] INFO [ExpirationReaper-1001-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)

[2022-05-02 13:23:05,086] INFO [ExpirationReaper-1001-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)

[2022-05-02 13:23:05,087] INFO [ExpirationReaper-1001-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)

[2022-05-02 13:23:05,102] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)

[2022-05-02 13:23:05,142] INFO Creating /brokers/ids/1001 (is it secure? false) (kafka.zk.KafkaZkClient)

[2022-05-02 13:23:05,165] INFO Stat of the created znode at /brokers/ids/1001 is: 28,28,1651497785157,1651497785157,1,0,0,72058993099931649,204,0,28

 (kafka.zk.KafkaZkClient)

[2022-05-02 13:23:05,166] INFO Registered broker 1001 at path /brokers/ids/1001 with addresses: SASL_PLAINTEXT://kafka:9092, czxid (broker epoch): 28 (kafka.zk.KafkaZkClient)

[2022-05-02 13:23:05,225] INFO [ControllerEventThread controllerId=1001] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)

[2022-05-02 13:23:05,232] INFO [ExpirationReaper-1001-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)

[2022-05-02 13:23:05,238] INFO [ExpirationReaper-1001-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)

[2022-05-02 13:23:05,239] INFO [ExpirationReaper-1001-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)

[2022-05-02 13:23:05,245] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)

[2022-05-02 13:23:05,253] INFO [Controller id=1001] 1001 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)

[2022-05-02 13:23:05,256] INFO [GroupCoordinator 1001]: Starting up. (kafka.coordinator.group.GroupCoordinator)

[2022-05-02 13:23:05,258] INFO [Controller id=1001] Creating FeatureZNode at path: /feature with contents: FeatureZNode(Enabled,Features{}) (kafka.controller.KafkaController)

[2022-05-02 13:23:05,261] INFO [GroupCoordinator 1001]: Startup complete. (kafka.coordinator.group.GroupCoordinator)

[2022-05-02 13:23:05,262] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)

[2022-05-02 13:23:05,292] INFO [ProducerId Manager 1001]: Acquired new producerId block (brokerId:1001,blockStartProducerId:0,blockEndProducerId:999) by writing to Zk with path version 1 (kafka.coordinator.transaction.ProducerIdManager)

[2022-05-02 13:23:05,292] INFO [TransactionCoordinator id=1001] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)

[2022-05-02 13:23:05,298] INFO [Transaction Marker Channel Manager 1001]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)

[2022-05-02 13:23:05,298] INFO [TransactionCoordinator id=1001] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)

[2022-05-02 13:23:05,300] INFO Updated cache from existing <empty> to latest FinalizedFeaturesAndEpoch(features=Features{}, epoch=0). (kafka.server.FinalizedFeatureCache)

[2022-05-02 13:23:05,300] INFO [Controller id=1001] Registering handlers (kafka.controller.KafkaController)

[2022-05-02 13:23:05,304] INFO [Controller id=1001] Deleting log dir event notifications (kafka.controller.KafkaController)

[2022-05-02 13:23:05,308] INFO [Controller id=1001] Deleting isr change notifications (kafka.controller.KafkaController)

[2022-05-02 13:23:05,311] INFO [Controller id=1001] Initializing controller context (kafka.controller.KafkaController)

[2022-05-02 13:23:05,317] INFO [ZooKeeperClient ACL authorizer] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient)

[2022-05-02 13:23:05,317] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@ef1695a (org.apache.zookeeper.ZooKeeper)

[2022-05-02 13:23:05,317] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)

[2022-05-02 13:23:05,317] INFO zookeeper.request.timeout value is 0. feature enabled= (org.apache.zookeeper.ClientCnxn)

[2022-05-02 13:23:05,318] INFO [ZooKeeperClient ACL authorizer] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)

[2022-05-02 13:23:05,319] INFO Client successfully logged in. (org.apache.zookeeper.Login)

[2022-05-02 13:23:05,319] INFO Client will use DIGEST-MD5 as SASL mechanism. (org.apache.zookeeper.client.ZooKeeperSaslClient)

[2022-05-02 13:23:05,320] INFO Opening socket connection to server zookeeper/172.30.0.4:2181. Will attempt to SASL-authenticate using Login Context section 'Client' (org.apache.zookeeper.ClientCnxn)

[2022-05-02 13:23:05,321] INFO Socket connection established, initiating session, client: /172.30.0.6:59604, server: zookeeper/172.30.0.4:2181 (org.apache.zookeeper.ClientCnxn)

[2022-05-02 13:23:05,325] INFO Session establishment complete on server zookeeper/172.30.0.4:2181, sessionid = 0x1000145be980002, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)

[2022-05-02 13:23:05,325] INFO [ZooKeeperClient ACL authorizer] Connected. (kafka.zookeeper.ZooKeeperClient)

[2022-05-02 13:23:05,332] INFO [Controller id=1001] Initialized broker epochs cache: HashMap(1001 -> 28) (kafka.controller.KafkaController)

[2022-05-02 13:23:05,338] DEBUG [Controller id=1001] Register BrokerModifications handler for Set(1001) (kafka.controller.KafkaController)

[2022-05-02 13:23:05,347] DEBUG [Channel manager on controller 1001]: Controller 1001 trying to connect to broker 1001 (kafka.controller.ControllerChannelManager)

[2022-05-02 13:23:05,357] INFO [RequestSendThread controllerId=1001] Starting (kafka.controller.RequestSendThread)

[2022-05-02 13:23:05,359] INFO [Controller id=1001] Currently active brokers in the cluster: Set(1001) (kafka.controller.KafkaController)

[2022-05-02 13:23:05,360] INFO [Controller id=1001] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController)

[2022-05-02 13:23:05,360] INFO [Controller id=1001] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController)

[2022-05-02 13:23:05,360] INFO [Controller id=1001] Fetching topic deletions in progress (kafka.controller.KafkaController)

[2022-05-02 13:23:05,363] INFO [Controller id=1001] List of topics to be deleted:  (kafka.controller.KafkaController)

[2022-05-02 13:23:05,364] INFO [Controller id=1001] List of topics ineligible for deletion:  (kafka.controller.KafkaController)

[2022-05-02 13:23:05,364] INFO [Controller id=1001] Initializing topic deletion manager (kafka.controller.KafkaController)

[2022-05-02 13:23:05,365] INFO [Topic Deletion Manager 1001] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager)

[2022-05-02 13:23:05,366] INFO [Controller id=1001] Sending update metadata request (kafka.controller.KafkaController)

[2022-05-02 13:23:05,369] INFO [Controller id=1001 epoch=1] Sending UpdateMetadata request to brokers HashSet(1001) for 0 partitions (state.change.logger)

[2022-05-02 13:23:05,380] INFO [/kafka-acl-changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)

[2022-05-02 13:23:05,380] INFO [ReplicaStateMachine controllerId=1001] Initializing replica state (kafka.controller.ZkReplicaStateMachine)

[2022-05-02 13:23:05,381] INFO [/kafka-acl-extended-changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)

[2022-05-02 13:23:05,382] INFO [ReplicaStateMachine controllerId=1001] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine)

[2022-05-02 13:23:05,385] INFO [ReplicaStateMachine controllerId=1001] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine)

[2022-05-02 13:23:05,386] DEBUG [ReplicaStateMachine controllerId=1001] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine)

[2022-05-02 13:23:05,386] INFO [PartitionStateMachine controllerId=1001] Initializing partition state (kafka.controller.ZkPartitionStateMachine)

[2022-05-02 13:23:05,387] INFO [PartitionStateMachine controllerId=1001] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine)

[2022-05-02 13:23:05,393] DEBUG [PartitionStateMachine controllerId=1001] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine)

[2022-05-02 13:23:05,394] INFO [Controller id=1001] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController)

[2022-05-02 13:23:05,402] INFO [Controller id=1001] Partitions undergoing preferred replica election:  (kafka.controller.KafkaController)

[2022-05-02 13:23:05,403] INFO [Controller id=1001] Partitions that completed preferred replica election:  (kafka.controller.KafkaController)

[2022-05-02 13:23:05,403] INFO [Controller id=1001] Skipping preferred replica election for partitions due to topic deletion:  (kafka.controller.KafkaController)

[2022-05-02 13:23:05,404] INFO [Controller id=1001] Resuming preferred replica election for partitions:  (kafka.controller.KafkaController)

[2022-05-02 13:23:05,405] INFO [Controller id=1001] Starting replica leader election (PREFERRED) for partitions  triggered by ZkTriggered (kafka.controller.KafkaController)

[2022-05-02 13:23:05,405] INFO User:admin (ldapext.security.authorization.SimpleLDAPAuthorizer)

[2022-05-02 13:23:05,417] INFO [Controller id=1001] Starting the controller scheduler (kafka.controller.KafkaController)

[2022-05-02 13:23:05,432] INFO [ExpirationReaper-1001-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)

[2022-05-02 13:23:05,455] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)

[2022-05-02 13:23:05,468] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1001] Starting socket server acceptors and processors (kafka.network.SocketServer)

[2022-05-02 13:23:05,473] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1001] Started data-plane acceptor and processor(s) for endpoint : ListenerName(SASL_PLAINTEXT) (kafka.network.SocketServer)

[2022-05-02 13:23:05,473] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1001] Started socket server acceptors and processors (kafka.network.SocketServer)

[2022-05-02 13:23:05,474] INFO Kafka version: 6.2.0-ccs (org.apache.kafka.common.utils.AppInfoParser)

[2022-05-02 13:23:05,474] INFO Kafka commitId: 1a5755cf9401c84f (org.apache.kafka.common.utils.AppInfoParser)

[2022-05-02 13:23:05,474] INFO Kafka startTimeMs: 1651497785473 (org.apache.kafka.common.utils.AppInfoParser)

[2022-05-02 13:23:05,476] INFO [KafkaServer id=1001] started (kafka.server.KafkaServer)

[2022-05-02 13:23:05,504] INFO [RequestSendThread controllerId=1001] Controller 1001 connected to kafka:9092 (id: 1001 rack: null) for sending state change requests (kafka.controller.RequestSendThread)

[2022-05-02 13:23:05,599] TRACE [Controller id=1001 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1001 rack: null) (state.change.logger)

[2022-05-02 13:23:05,677] INFO [broker-1001-to-controller-send-thread]: Recorded new controller, from now on will use broker kafka:9092 (id: 1001 rack: null) (kafka.server.BrokerToControllerRequestThread)

[2022-05-02 13:23:10,419] INFO [Controller id=1001] Processing automatic preferred replica leader election (kafka.controller.KafkaController)

[2022-05-02 13:23:10,420] TRACE [Controller id=1001] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)

[2022-05-02 13:28:10,425] INFO [Controller id=1001] Processing automatic preferred replica leader election (kafka.controller.KafkaController)

[2022-05-02 13:28:10,425] TRACE [Controller id=1001] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)

[2022-05-02 13:33:10,426] INFO [Controller id=1001] Processing automatic preferred replica leader election (kafka.controller.KafkaController)

[2022-05-02 13:33:10,426] TRACE [Controller id=1001] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)

[2022-05-02 13:38:10,428] INFO [Controller id=1001] Processing automatic preferred replica leader election (kafka.controller.KafkaController)

[2022-05-02 13:38:10,428] TRACE [Controller id=1001] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)

However, when running Confluent locally, things work fine. server.properties:

broker.id=0
listeners=SASL_PLAINTEXT://localhost:9092

# Security
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN

listener.name.sasl_plaintext.plain.sasl.server.callback.handler.class=\
  ext.security.authentication.SimpleXYZAuthentication

authorizer.class.name=ext.security.authorization.SimpleXYZAuthorizer

super.users=User:admin

zookeeper.properties:

dataDir=/tmp/zookeeper
# the port at which the clients will connect
clientPort=2181
# disable the per-ip limit on the number of connections since this is a non-production config
maxClientCnxns=0
# Disable the adminserver by default to avoid port conflicts.
# Set the port to something non-conflicting if choosing to enable this
admin.enableServer=false
# admin.serverPort=8080
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
jaasLoginRenew=3600000

Kindly let me know what is going wrong here.

OneCricketeer commented 2 years ago

docker exec into the containers and compare the properties there vs what works locally
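
For example, assuming the standard cp-kafka image layout where the rendered broker config is written to /etc/kafka/kafka.properties at startup (container name and local path below are placeholders):

```bash
# Dump the config that the image actually generated from the environment variables
docker exec -it <kafka-container> cat /etc/kafka/kafka.properties

# Compare it with the server.properties that works locally
diff <(docker exec <kafka-container> cat /etc/kafka/kafka.properties) ./server.properties
```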

AntonSmolkov commented 1 year ago

@andrewegel Hi. The same here, on version 7.4.0. KAFKA_LISTENER_NAME_PASSW___SASL___SSL_SASL_ENABLED_MECHANISMS is converted to listener.name.passw-sasl-ssl.sasl.enabled.mechanisms. I got a dash instead of an underscore (checked via docker exec).

AntonSmolkov commented 1 year ago

This is especially inconvenient because this part of the code, https://github.com/confluentinc/kafka-images/blob/4c7e5db74fa87808045cef65c34b20d5ea5b45f5/server/include/etc/confluent/docker/configure#L124, expects an underscore in the listener name.

AntonSmolkov commented 1 year ago

I checked a double underscore (__), and surprisingly it was successfully templated to a single one. Looks like there is a bug in the documentation. It should have been:

Convert to upper-case.
Separate each word with _.
Replace a period (.) with a single underscore (_).
*Replace an underscore (_) with double underscores (__).*
*Replace a dash (-) with triple underscores (___).*
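
If that corrected mapping is right, a rough sketch of the property-to-variable translation, and the names it would produce for the two properties from the original question, looks like this (illustrative only, not the actual templating code):

```python
# Sketch of the corrected rules above; not the real dub.py / configure logic.
def prop_to_env(prop: str, prefix: str = "KAFKA_") -> str:
    # Order matters: expand existing underscores first so the characters
    # introduced by the later replacements are not expanded again.
    name = prop.replace("_", "__")   # underscore -> double underscore
    name = name.replace("-", "___")  # dash -> triple underscore
    name = name.replace(".", "_")    # period -> single underscore
    return prefix + name.upper()

print(prop_to_env("listener.name.sasl_plaintext.plain.sasl.server.callback.handler.class"))
# KAFKA_LISTENER_NAME_SASL__PLAINTEXT_PLAIN_SASL_SERVER_CALLBACK_HANDLER_CLASS
print(prop_to_env("listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config"))
# KAFKA_LISTENER_NAME_SASL__SSL_SCRAM___SHA___256_SASL_JAAS_CONFIG
```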