bitnami / containers

Bitnami container images
https://bitnami.com

[bitnami/kafka] SSL Handshake Failure from kafka-console-producer.sh #63480

Closed: jamesacris closed this issue 7 months ago

jamesacris commented 8 months ago

Name and Version

bitnami/kafka:3.6

What architecture are you using?

amd64

What steps will reproduce the bug?

Using this docker-compose.yml:

version: "3"
services:
  kafka:
    image: 'bitnami/kafka:3.6'
    ports:
      - '9092:9092'
    volumes:
      - ./server.properties:/bitnami/kafka/config/server.properties
      - ./internal_client.properties:/bitnami/kafka/config/producer.properties
      - ./internal_client.properties:/bitnami/kafka/config/consumer.properties
      - ./keystore.pem:/opt/bitnami/kafka/config/certs/keystore.pem:ro
      - ./CA.pem:/opt/bitnami/kafka/config/certs/CA.pem:ro

which uses the following server.properties:

# Kraft settings
process.roles=broker,controller
node.id=1
controller.quorum.voters=1@localhost:9093

# Listener settings
listeners=SSL://:9092,CONTROLLER://:9093
#inter.broker.listener.name=SSL
advertised.listeners=SSL://localhost:9092
controller.listener.names=CONTROLLER
listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# TLS/SSL settings
security.protocol=SSL
security.inter.broker.protocol=SSL
ssl.keystore.type=PEM
ssl.keystore.location=/opt/bitnami/kafka/config/certs/keystore.pem
ssl.key.password=<hidden>
ssl.truststore.type=PEM
ssl.truststore.location=/opt/bitnami/kafka/config/certs/CA.pem
ssl.client.auth=none
ssl.endpoint.identification.algorithm=
producer.ssl.endpoint.identification.algorithm=
consumer.ssl.endpoint.identification.algorithm=

# Performance settings
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/tmp/kraft-combined-logs
num.partitions=1
num.recovery.threads.per.data.dir=1

# Replication factor settings
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

# Log retention settings
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000

The producer and consumer properties are set in internal_client.properties:

security.protocol=SSL
ssl.keystore.type=PEM
ssl.keystore.location=/opt/bitnami/kafka/config/certs/keystore.pem
ssl.key.password=<hidden>
ssl.truststore.type=PEM
ssl.truststore.location=/opt/bitnami/kafka/config/certs/CA.pem

In this case keystore.pem contains the server certificate and private key. This was signed by my organisation's CA, the trust chain for which is present in CA.pem.
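
As a quick sanity check (not part of the original report), the contents of the two PEM files can be inspected with standard grep/openssl commands before mounting them; the file names are the ones from the compose file above:

# Confirm which PEM blocks each file actually contains (private key and/or certificates)
grep -e "-----BEGIN" keystore.pem CA.pem

# Print subject and issuer for every certificate in the CA bundle
openssl crl2pkcs7 -nocrl -certfile CA.pem | openssl pkcs7 -print_certs -noout

# Show subject, issuer and validity dates of the server certificate in the keystore
openssl x509 -in keystore.pem -noout -subject -issuer -dates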

What is the expected behavior?

The Docker container starts up OK, and I can use kafka-console-producer.sh and kafka-console-consumer.sh inside the container, with the producer.properties and consumer.properties above, to interact with topics on the command line.
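
For concreteness, the consumer half of that workflow would look roughly like the following, mirroring the producer invocation shown further down (the test-tls topic name and the /opt/bitnami/kafka/config path are taken from that command):

kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test-tls \
  --from-beginning --consumer.config /opt/bitnami/kafka/config/consumer.properties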

What do you see instead?

The container starts up OK. When I try to run kafka-console-producer.sh, I get an error:

$ kafka-console-producer.sh --bootstrap-server localhost:9092 --topic test-tls --producer.config /opt/bitnami/kafka/config/producer.properties
>[2024-03-01 10:54:25,231] ERROR [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9092) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
[2024-03-01 10:54:25,232] WARN [Producer clientId=console-producer] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2024-03-01 10:54:25,366] ERROR [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9092) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
^C

Accompanied by errors in the container logs:

[2024-03-01 10:54:25,228] INFO [SocketServer listenerType=BROKER, nodeId=1] Failed authentication with /127.0.0.1 (channelId=127.0.0.1:9092-127.0.0.1:53984-0) (SSL handshake failed) (org.apache.kafka.common.network.Selector)
[2024-03-01 10:54:25,387] INFO [SocketServer listenerType=BROKER, nodeId=1] Failed authentication with /127.0.0.1 (channelId=127.0.0.1:9092-127.0.0.1:53998-0) (SSL handshake failed) (org.apache.kafka.common.network.Selector)
[2024-03-01 10:54:25,805] INFO [SocketServer listenerType=BROKER, nodeId=1] Failed authentication with /127.0.0.1 (channelId=127.0.0.1:9092-127.0.0.1:54010-0) (SSL handshake failed) (org.apache.kafka.common.network.Selector)

These messages don't tell me the root cause of the problem, such as an issue with my certificates or with my configuration; they only say that the SSL handshake failed.
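
For anyone hitting the same thing, a standard JVM diagnostic that should surface the underlying TLS error is to enable JSSE handshake tracing; the Kafka CLI wrapper scripts pass KAFKA_OPTS through to the JVM:

export KAFKA_OPTS="-Djavax.net.debug=ssl,handshake"
kafka-console-producer.sh --bootstrap-server localhost:9092 --topic test-tls --producer.config /opt/bitnami/kafka/config/producer.properties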

Additional information

If I run a connection test with openssl s_client -cert keystore.pem -CAfile CA.pem -debug -connect localhost:9092, the response includes the following, along with the server certificate and other output:

SSL handshake has read 1701 bytes and written 377 bytes
Verification: OK
---
Verify return code: 0 (ok)
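
One caveat with this check (an extra step, not something from the failing run above): s_client negotiates its own protocol version, so to mirror the broker's configured ssl.protocol=TLSv1.3 the test can be repeated with the version pinned, assuming an OpenSSL build that supports the -tls1_3 flag:

openssl s_client -tls1_3 -CAfile CA.pem -connect localhost:9092 </dev/null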

The full container logs are as follows:

kafka 10:30:48.89 INFO  ==>
kafka 10:30:48.89 INFO  ==> Welcome to the Bitnami kafka container
kafka 10:30:48.89 INFO  ==> Subscribe to project updates by watching https://github.com/bitnami/containers
kafka 10:30:48.89 INFO  ==> Submit issues and feature requests at https://github.com/bitnami/containers/issues
kafka 10:30:48.89 INFO  ==>
kafka 10:30:48.89 INFO  ==> ** Starting Kafka setup **
kafka 10:30:48.94 INFO  ==> Initializing KRaft storage metadata
kafka 10:30:48.94 WARN  ==> KAFKA_KRAFT_CLUSTER_ID not set - If using multiple nodes then you must use the same Cluster ID for each one
kafka 10:30:50.01 INFO  ==> Generated Kafka cluster ID 'Yt-D0rsjQnaOh3r6COffgQ'
kafka 10:30:50.01 INFO  ==> Formatting storage directories to add metadata...
All of the log directories are already formatted.
kafka 10:30:51.30 INFO  ==> ** Kafka setup finished! **

kafka 10:30:51.31 INFO  ==> ** Starting Kafka **
[2024-03-01 10:30:51,929] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2024-03-01 10:30:52,181] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
[2024-03-01 10:30:52,306] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
[2024-03-01 10:30:52,308] INFO [ControllerServer id=1] Starting controller (kafka.server.ControllerServer)
[2024-03-01 10:30:52,326] INFO authorizerStart completed for endpoint CONTROLLER. Endpoint is now READY. (org.apache.kafka.server.network.EndpointReadyFutures)
[2024-03-01 10:30:52,656] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
[2024-03-01 10:30:52,692] INFO [SocketServer listenerType=CONTROLLER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(CONTROLLER) (kafka.network.SocketServer)
[2024-03-01 10:30:52,694] INFO [SharedServer id=1] Starting SharedServer (kafka.server.SharedServer)
[2024-03-01 10:30:52,742] INFO [LogLoader partition=__cluster_metadata-0, dir=/tmp/kraft-combined-logs] Recovering unflushed segment 0. 0/1 recovered for __cluster_metadata-0. (kafka.log.LogLoader)
[2024-03-01 10:30:52,743] INFO [LogLoader partition=__cluster_metadata-0, dir=/tmp/kraft-combined-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
[2024-03-01 10:30:52,744] INFO [LogLoader partition=__cluster_metadata-0, dir=/tmp/kraft-combined-logs] Reloading from producer snapshot and rebuilding producer state from offset 0 (kafka.log.UnifiedLog$)
[2024-03-01 10:30:52,744] INFO Deleted producer state snapshot /tmp/kraft-combined-logs/__cluster_metadata-0/00000000000000000017.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile)
[2024-03-01 10:30:52,744] INFO [LogLoader partition=__cluster_metadata-0, dir=/tmp/kraft-combined-logs] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 0 (kafka.log.UnifiedLog$)
[2024-03-01 10:30:52,766] INFO [ProducerStateManager partition=__cluster_metadata-0]Wrote producer snapshot at offset 17 with 0 producer ids in 4 ms. (org.apache.kafka.storage.internals.log.ProducerStateManager)
[2024-03-01 10:30:52,777] INFO [LogLoader partition=__cluster_metadata-0, dir=/tmp/kraft-combined-logs] Loading producer state till offset 17 with message format version 2 (kafka.log.UnifiedLog$)
[2024-03-01 10:30:52,777] INFO [LogLoader partition=__cluster_metadata-0, dir=/tmp/kraft-combined-logs] Reloading from producer snapshot and rebuilding producer state from offset 17 (kafka.log.UnifiedLog$)
[2024-03-01 10:30:52,777] INFO [ProducerStateManager partition=__cluster_metadata-0]Loading producer state from snapshot file 'SnapshotFile(offset=17, file=/tmp/kraft-combined-logs/__cluster_metadata-0/00000000000000000017.snapshot)' (org.apache.kafka.storage.internals.log.ProducerStateManager)
[2024-03-01 10:30:52,778] INFO [LogLoader partition=__cluster_metadata-0, dir=/tmp/kraft-combined-logs] Producer state recovery took 1ms for snapshot load and 0ms for segment recovery from offset 17 (kafka.log.UnifiedLog$)
[2024-03-01 10:30:52,808] INFO Initialized snapshots with IDs Set() from /tmp/kraft-combined-logs/__cluster_metadata-0 (kafka.raft.KafkaMetadataLog$)
[2024-03-01 10:30:52,827] INFO [raft-expiration-reaper]: Starting (kafka.raft.TimingWheelExpirationService$ExpiredOperationReaper)
[2024-03-01 10:30:52,972] INFO [RaftManager id=1] Completed transition to ResignedState(localId=1, epoch=1, voters=[1], electionTimeoutMs=1567, unackedVoters=[], preferredSuccessors=[]) from null (org.apache.kafka.raft.QuorumState)
[2024-03-01 10:30:52,975] INFO [RaftManager id=1] Completed transition to CandidateState(localId=1, epoch=2, retries=1, voteStates={1=GRANTED}, highWatermark=Optional.empty, electionTimeoutMs=1776) from ResignedState(localId=1, epoch=1, voters=[1], electionTimeoutMs=1567, unackedVoters=[], preferredSuccessors=[]) (org.apache.kafka.raft.QuorumState)
[2024-03-01 10:30:52,979] INFO [RaftManager id=1] Completed transition to Leader(localId=1, epoch=2, epochStartOffset=17, highWatermark=Optional.empty, voterStates={1=ReplicaState(nodeId=1, endOffset=Optional.empty, lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=true)}) from CandidateState(localId=1, epoch=2, retries=1, voteStates={1=GRANTED}, highWatermark=Optional.empty, electionTimeoutMs=1776) (org.apache.kafka.raft.QuorumState)
[2024-03-01 10:30:52,990] INFO [kafka-1-raft-outbound-request-thread]: Starting (kafka.raft.RaftSendThread)
[2024-03-01 10:30:52,990] INFO [kafka-1-raft-io-thread]: Starting (kafka.raft.KafkaRaftManager$RaftIoThread)
[2024-03-01 10:30:53,013] INFO [RaftManager id=1] High watermark set to LogOffsetMetadata(offset=18, metadata=Optional[(segmentBaseOffset=0,relativePositionInSegment=1237)]) for the first time for epoch 2 based on indexOfHw 0 and voters [ReplicaState(nodeId=1, endOffset=Optional[LogOffsetMetadata(offset=18, metadata=Optional[(segmentBaseOffset=0,relativePositionInSegment=1237)])], lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=true)] (org.apache.kafka.raft.LeaderState)
[2024-03-01 10:30:53,014] INFO [MetadataLoader id=1] initializeNewPublishers: The loader is still catching up because we have loaded up to offset -1, but the high water mark is 18 (org.apache.kafka.image.loader.MetadataLoader)
[2024-03-01 10:30:53,015] INFO [ControllerServer id=1] Waiting for controller quorum voters future (kafka.server.ControllerServer)
[2024-03-01 10:30:53,016] INFO [ControllerServer id=1] Finished waiting for controller quorum voters future (kafka.server.ControllerServer)
[2024-03-01 10:30:53,026] INFO [RaftManager id=1] Registered the listener org.apache.kafka.image.loader.MetadataLoader@1516470837 (org.apache.kafka.raft.KafkaRaftClient)
[2024-03-01 10:30:53,034] INFO [MetadataLoader id=1] maybePublishMetadata(LOG_DELTA): The loader is still catching up because we have loaded up to offset 0, but the high water mark is 18 (org.apache.kafka.image.loader.MetadataLoader)
[2024-03-01 10:30:53,045] INFO [MetadataLoader id=1] maybePublishMetadata(LOG_DELTA): The loader finished catching up to the current high water mark of 18 (org.apache.kafka.image.loader.MetadataLoader)
[2024-03-01 10:30:53,048] INFO [QuorumController id=1] Creating new QuorumController with clusterId uz_bu6KaRriLjudbKZTsaQ. (org.apache.kafka.controller.QuorumController)
[2024-03-01 10:30:53,049] INFO [RaftManager id=1] Registered the listener org.apache.kafka.controller.QuorumController$QuorumMetaLogListener@1201137727 (org.apache.kafka.raft.KafkaRaftClient)
[2024-03-01 10:30:53,051] INFO [QuorumController id=1] Replayed BeginTransactionRecord(name='Bootstrap records') at offset 1. (org.apache.kafka.controller.OffsetControlManager)
[2024-03-01 10:30:53,051] INFO [QuorumController id=1] Replayed a FeatureLevelRecord setting metadata version to 3.6-IV2 (org.apache.kafka.controller.FeatureControlManager)
[2024-03-01 10:30:53,051] INFO [QuorumController id=1] Replayed EndTransactionRecord() at offset 4. (org.apache.kafka.controller.OffsetControlManager)
[2024-03-01 10:30:53,053] INFO [QuorumController id=1] Replayed initial RegisterBrokerRecord for broker 1: RegisterBrokerRecord(brokerId=1, isMigratingZkBroker=false, incarnationId=gS6KIxlJTvWKMY8IdGLXfg, brokerEpoch=6, endPoints=[BrokerEndpoint(name='SSL', host='localhost', port=9092, securityProtocol=1)], features=[BrokerFeature(name='metadata.version', minSupportedVersion=1, maxSupportedVersion=14)], rack=null, fenced=true, inControlledShutdown=false) (org.apache.kafka.controller.ClusterControlManager)
[2024-03-01 10:30:53,053] INFO [QuorumController id=1] Replayed BrokerRegistrationChangeRecord modifying the registration for broker 1: BrokerRegistrationChangeRecord(brokerId=1, brokerEpoch=6, fenced=-1, inControlledShutdown=0) (org.apache.kafka.controller.ClusterControlManager)
[2024-03-01 10:30:53,054] INFO [controller-1-ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-03-01 10:30:53,055] INFO [QuorumController id=1] Replayed BrokerRegistrationChangeRecord modifying the registration for broker 1: BrokerRegistrationChangeRecord(brokerId=1, brokerEpoch=6, fenced=1, inControlledShutdown=0) (org.apache.kafka.controller.ClusterControlManager)
[2024-03-01 10:30:53,055] INFO [QuorumController id=1] Becoming the active controller at epoch 2, next write offset 18. (org.apache.kafka.controller.QuorumController)
[2024-03-01 10:30:53,055] INFO [controller-1-ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-03-01 10:30:53,056] INFO [controller-1-ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-03-01 10:30:53,057] INFO [controller-1-ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-03-01 10:30:53,059] WARN [QuorumController id=1] Performing controller activation. Loaded ZK migration state of NONE. (org.apache.kafka.controller.QuorumController)
[2024-03-01 10:30:53,072] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-01 10:30:53,079] INFO [ControllerServer id=1] Waiting for the controller metadata publishers to be installed (kafka.server.ControllerServer)
[2024-03-01 10:30:53,079] INFO [ControllerServer id=1] Finished waiting for the controller metadata publishers to be installed (kafka.server.ControllerServer)
[2024-03-01 10:30:53,080] INFO [SocketServer listenerType=CONTROLLER, nodeId=1] Enabling request processing. (kafka.network.SocketServer)
[2024-03-01 10:30:53,082] INFO [MetadataLoader id=1] InitializeNewPublishers: initializing SnapshotGenerator with a snapshot at offset 17 (org.apache.kafka.image.loader.MetadataLoader)
[2024-03-01 10:30:53,082] INFO [MetadataLoader id=1] InitializeNewPublishers: initializing FeaturesPublisher with a snapshot at offset 17 (org.apache.kafka.image.loader.MetadataLoader)
[2024-03-01 10:30:53,083] INFO Awaiting socket connections on 0.0.0.0:9093. (kafka.network.DataPlaneAcceptor)
[2024-03-01 10:30:53,087] INFO [MetadataLoader id=1] InitializeNewPublishers: initializing DynamicConfigPublisher controller id=1 with a snapshot at offset 17 (org.apache.kafka.image.loader.MetadataLoader)
[2024-03-01 10:30:53,090] INFO [MetadataLoader id=1] InitializeNewPublishers: initializing DynamicClientQuotaPublisher controller id=1 with a snapshot at offset 17 (org.apache.kafka.image.loader.MetadataLoader)
[2024-03-01 10:30:53,091] INFO [MetadataLoader id=1] InitializeNewPublishers: initializing ScramPublisher controller id=1 with a snapshot at offset 17 (org.apache.kafka.image.loader.MetadataLoader)
[2024-03-01 10:30:53,092] INFO [MetadataLoader id=1] InitializeNewPublishers: initializing DelegationTokenPublisher controller id=1 with a snapshot at offset 17 (org.apache.kafka.image.loader.MetadataLoader)
[2024-03-01 10:30:53,093] INFO [MetadataLoader id=1] InitializeNewPublishers: initializing ControllerMetadataMetricsPublisher with a snapshot at offset 17 (org.apache.kafka.image.loader.MetadataLoader)
[2024-03-01 10:30:53,093] INFO [MetadataLoader id=1] InitializeNewPublishers: initializing AclPublisher controller id=1 with a snapshot at offset 17 (org.apache.kafka.image.loader.MetadataLoader)
[2024-03-01 10:30:53,098] INFO [ControllerServer id=1] Waiting for all of the authorizer futures to be completed (kafka.server.ControllerServer)
[2024-03-01 10:30:53,098] INFO [ControllerServer id=1] Finished waiting for all of the authorizer futures to be completed (kafka.server.ControllerServer)
[2024-03-01 10:30:53,098] INFO [ControllerServer id=1] Waiting for all of the SocketServer Acceptors to be started (kafka.server.ControllerServer)
[2024-03-01 10:30:53,098] INFO [ControllerServer id=1] Finished waiting for all of the SocketServer Acceptors to be started (kafka.server.ControllerServer)
[2024-03-01 10:30:53,100] INFO [BrokerServer id=1] Transition from SHUTDOWN to STARTING (kafka.server.BrokerServer)
[2024-03-01 10:30:53,100] INFO [BrokerServer id=1] Starting broker (kafka.server.BrokerServer)
[2024-03-01 10:30:53,109] INFO [broker-1-ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-03-01 10:30:53,109] INFO [broker-1-ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-03-01 10:30:53,109] INFO [broker-1-ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-03-01 10:30:53,110] INFO [broker-1-ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-03-01 10:30:53,123] INFO [BrokerServer id=1] Waiting for controller quorum voters future (kafka.server.BrokerServer)
[2024-03-01 10:30:53,123] INFO [BrokerServer id=1] Finished waiting for controller quorum voters future (kafka.server.BrokerServer)
[2024-03-01 10:30:53,128] INFO [broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
[2024-03-01 10:30:53,129] INFO [broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node localhost:9093 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
[2024-03-01 10:30:53,169] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
[2024-03-01 10:30:53,675] INFO [SocketServer listenerType=BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(SSL) (kafka.network.SocketServer)
[2024-03-01 10:30:53,680] INFO [broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
[2024-03-01 10:30:53,680] INFO [broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node localhost:9093 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
[2024-03-01 10:30:53,711] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-01 10:30:53,723] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-01 10:30:53,723] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-01 10:30:53,745] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-01 10:30:53,755] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-01 10:30:53,759] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-01 10:30:53,760] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-01 10:30:53,802] INFO [BrokerLifecycleManager id=1] Incarnation aXlqUkpZRcW174tUagHgHQ of broker 1 in cluster uz_bu6KaRriLjudbKZTsaQ is now STARTING. (kafka.server.BrokerLifecycleManager)
[2024-03-01 10:30:53,803] INFO [broker-1-to-controller-heartbeat-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
[2024-03-01 10:30:53,803] INFO [broker-1-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node localhost:9093 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
[2024-03-01 10:30:53,832] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-01 10:30:53,881] INFO [QuorumController id=1] Replayed RegisterBrokerRecord establishing a new incarnation of broker 1: RegisterBrokerRecord(brokerId=1, isMigratingZkBroker=false, incarnationId=aXlqUkpZRcW174tUagHgHQ, brokerEpoch=19, endPoints=[BrokerEndpoint(name='SSL', host='localhost', port=9092, securityProtocol=1)], features=[BrokerFeature(name='metadata.version', minSupportedVersion=1, maxSupportedVersion=14)], rack=null, fenced=true, inControlledShutdown=false) (org.apache.kafka.controller.ClusterControlManager)
[2024-03-01 10:30:53,882] INFO [BrokerServer id=1] Waiting for the broker metadata publishers to be installed (kafka.server.BrokerServer)
[2024-03-01 10:30:53,883] INFO [BrokerServer id=1] Finished waiting for the broker metadata publishers to be installed (kafka.server.BrokerServer)
[2024-03-01 10:30:53,883] INFO [BrokerServer id=1] Waiting for the controller to acknowledge that we are caught up (kafka.server.BrokerServer)
[2024-03-01 10:30:53,883] INFO [MetadataLoader id=1] InitializeNewPublishers: initializing BrokerMetadataPublisher with a snapshot at offset 18 (org.apache.kafka.image.loader.MetadataLoader)
[2024-03-01 10:30:53,884] INFO [BrokerMetadataPublisher id=1] Publishing initial metadata at offset OffsetAndEpoch(offset=18, epoch=2) with metadata.version 3.6-IV2. (kafka.server.metadata.BrokerMetadataPublisher)
[2024-03-01 10:30:53,884] INFO Loading logs from log dirs ArrayBuffer(/tmp/kraft-combined-logs) (kafka.log.LogManager)
[2024-03-01 10:30:53,888] INFO No logs found to be loaded in /tmp/kraft-combined-logs (kafka.log.LogManager)
[2024-03-01 10:30:53,899] INFO Loaded 0 logs in 14ms (kafka.log.LogManager)
[2024-03-01 10:30:53,899] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
[2024-03-01 10:30:53,900] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
[2024-03-01 10:30:53,915] INFO [BrokerLifecycleManager id=1] Successfully registered broker 1 with broker epoch 19 (kafka.server.BrokerLifecycleManager)
[2024-03-01 10:30:53,922] INFO Starting the log cleaner (kafka.log.LogCleaner)
[2024-03-01 10:30:53,943] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread)
[2024-03-01 10:30:53,947] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
[2024-03-01 10:30:53,948] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
[2024-03-01 10:30:53,949] INFO [AddPartitionsToTxnSenderThread-1]: Starting (kafka.server.AddPartitionsToTxnManager)
[2024-03-01 10:30:53,950] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
[2024-03-01 10:30:53,951] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
[2024-03-01 10:30:53,953] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2024-03-01 10:30:53,953] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
[2024-03-01 10:30:53,954] INFO [BrokerMetadataPublisher id=1] Updating metadata.version to 14 at offset OffsetAndEpoch(offset=18, epoch=2). (kafka.server.metadata.BrokerMetadataPublisher)
[2024-03-01 10:30:53,982] INFO [BrokerLifecycleManager id=1] The broker has caught up. Transitioning from STARTING to RECOVERY. (kafka.server.BrokerLifecycleManager)
[2024-03-01 10:30:53,982] INFO [BrokerServer id=1] Finished waiting for the controller to acknowledge that we are caught up (kafka.server.BrokerServer)
[2024-03-01 10:30:53,982] INFO [BrokerServer id=1] Waiting for the initial broker metadata update to be published (kafka.server.BrokerServer)
[2024-03-01 10:30:53,982] INFO [BrokerServer id=1] Finished waiting for the initial broker metadata update to be published (kafka.server.BrokerServer)
[2024-03-01 10:30:53,984] INFO KafkaConfig values:
    advertised.listeners = SSL://localhost:9092
    alter.config.policy.class.name = null
    alter.log.dirs.replication.quota.window.num = 11
    alter.log.dirs.replication.quota.window.size.seconds = 1
    authorizer.class.name =
    auto.create.topics.enable = true
    auto.include.jmx.reporter = true
    auto.leader.rebalance.enable = true
    background.threads = 10
    broker.heartbeat.interval.ms = 2000
    broker.id = 1
    broker.id.generation.enable = true
    broker.rack = null
    broker.session.timeout.ms = 9000
    client.quota.callback.class = null
    compression.type = producer
    connection.failed.authentication.delay.ms = 100
    connections.max.idle.ms = 600000
    connections.max.reauth.ms = 0
    control.plane.listener.name = null
    controlled.shutdown.enable = true
    controlled.shutdown.max.retries = 3
    controlled.shutdown.retry.backoff.ms = 5000
    controller.listener.names = CONTROLLER
    controller.quorum.append.linger.ms = 25
    controller.quorum.election.backoff.max.ms = 1000
    controller.quorum.election.timeout.ms = 1000
    controller.quorum.fetch.timeout.ms = 2000
    controller.quorum.request.timeout.ms = 2000
    controller.quorum.retry.backoff.ms = 20
    controller.quorum.voters = [1@localhost:9093]
    controller.quota.window.num = 11
    controller.quota.window.size.seconds = 1
    controller.socket.timeout.ms = 30000
    create.topic.policy.class.name = null
    default.replication.factor = 1
    delegation.token.expiry.check.interval.ms = 3600000
    delegation.token.expiry.time.ms = 86400000
    delegation.token.master.key = null
    delegation.token.max.lifetime.ms = 604800000
    delegation.token.secret.key = null
    delete.records.purgatory.purge.interval.requests = 1
    delete.topic.enable = true
    early.start.listeners = null
    fetch.max.bytes = 57671680
    fetch.purgatory.purge.interval.requests = 1000
    group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.RangeAssignor]
    group.consumer.heartbeat.interval.ms = 5000
    group.consumer.max.heartbeat.interval.ms = 15000
    group.consumer.max.session.timeout.ms = 60000
    group.consumer.max.size = 2147483647
    group.consumer.min.heartbeat.interval.ms = 5000
    group.consumer.min.session.timeout.ms = 45000
    group.consumer.session.timeout.ms = 45000
    group.coordinator.new.enable = false
    group.coordinator.threads = 1
    group.initial.rebalance.delay.ms = 3000
    group.max.session.timeout.ms = 1800000
    group.max.size = 2147483647
    group.min.session.timeout.ms = 6000
    initial.broker.registration.timeout.ms = 60000
    inter.broker.listener.name = null
    inter.broker.protocol.version = 3.6-IV2
    kafka.metrics.polling.interval.secs = 10
    kafka.metrics.reporters = []
    leader.imbalance.check.interval.seconds = 300
    leader.imbalance.per.broker.percentage = 10
    listener.security.protocol.map = CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
    listeners = SSL://:9092,CONTROLLER://:9093
    log.cleaner.backoff.ms = 15000
    log.cleaner.dedupe.buffer.size = 134217728
    log.cleaner.delete.retention.ms = 86400000
    log.cleaner.enable = true
    log.cleaner.io.buffer.load.factor = 0.9
    log.cleaner.io.buffer.size = 524288
    log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
    log.cleaner.max.compaction.lag.ms = 9223372036854775807
    log.cleaner.min.cleanable.ratio = 0.5
    log.cleaner.min.compaction.lag.ms = 0
    log.cleaner.threads = 1
    log.cleanup.policy = [delete]
    log.dir = /tmp/kafka-logs
    log.dirs = /tmp/kraft-combined-logs
    log.flush.interval.messages = 9223372036854775807
    log.flush.interval.ms = null
    log.flush.offset.checkpoint.interval.ms = 60000
    log.flush.scheduler.interval.ms = 9223372036854775807
    log.flush.start.offset.checkpoint.interval.ms = 60000
    log.index.interval.bytes = 4096
    log.index.size.max.bytes = 10485760
    log.local.retention.bytes = -2
    log.local.retention.ms = -2
    log.message.downconversion.enable = true
    log.message.format.version = 3.0-IV1
    log.message.timestamp.after.max.ms = 9223372036854775807
    log.message.timestamp.before.max.ms = 9223372036854775807
    log.message.timestamp.difference.max.ms = 9223372036854775807
    log.message.timestamp.type = CreateTime
    log.preallocate = false
    log.retention.bytes = -1
    log.retention.check.interval.ms = 300000
    log.retention.hours = 168
    log.retention.minutes = null
    log.retention.ms = null
    log.roll.hours = 168
    log.roll.jitter.hours = 0
    log.roll.jitter.ms = null
    log.roll.ms = null
    log.segment.bytes = 1073741824
    log.segment.delete.delay.ms = 60000
    max.connection.creation.rate = 2147483647
    max.connections = 2147483647
    max.connections.per.ip = 2147483647
    max.connections.per.ip.overrides =
    max.incremental.fetch.session.cache.slots = 1000
    message.max.bytes = 1048588
    metadata.log.dir = null
    metadata.log.max.record.bytes.between.snapshots = 20971520
    metadata.log.max.snapshot.interval.ms = 3600000
    metadata.log.segment.bytes = 1073741824
    metadata.log.segment.min.bytes = 8388608
    metadata.log.segment.ms = 604800000
    metadata.max.idle.interval.ms = 500
    metadata.max.retention.bytes = 104857600
    metadata.max.retention.ms = 604800000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    min.insync.replicas = 1
    node.id = 1
    num.io.threads = 8
    num.network.threads = 3
    num.partitions = 1
    num.recovery.threads.per.data.dir = 1
    num.replica.alter.log.dirs.threads = null
    num.replica.fetchers = 1
    offset.metadata.max.bytes = 4096
    offsets.commit.required.acks = -1
    offsets.commit.timeout.ms = 5000
    offsets.load.buffer.size = 5242880
    offsets.retention.check.interval.ms = 600000
    offsets.retention.minutes = 10080
    offsets.topic.compression.codec = 0
    offsets.topic.num.partitions = 50
    offsets.topic.replication.factor = 1
    offsets.topic.segment.bytes = 104857600
    password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
    password.encoder.iterations = 4096
    password.encoder.key.length = 128
    password.encoder.keyfactory.algorithm = null
    password.encoder.old.secret = null
    password.encoder.secret = null
    principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
    process.roles = [broker, controller]
    producer.id.expiration.check.interval.ms = 600000
    producer.id.expiration.ms = 86400000
    producer.purgatory.purge.interval.requests = 1000
    queued.max.request.bytes = -1
    queued.max.requests = 500
    quota.window.num = 11
    quota.window.size.seconds = 1
    remote.log.index.file.cache.total.size.bytes = 1073741824
    remote.log.manager.task.interval.ms = 30000
    remote.log.manager.task.retry.backoff.max.ms = 30000
    remote.log.manager.task.retry.backoff.ms = 500
    remote.log.manager.task.retry.jitter = 0.2
    remote.log.manager.thread.pool.size = 10
    remote.log.metadata.custom.metadata.max.bytes = 128
    remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager
    remote.log.metadata.manager.class.path = null
    remote.log.metadata.manager.impl.prefix = rlmm.config.
    remote.log.metadata.manager.listener.name = null
    remote.log.reader.max.pending.tasks = 100
    remote.log.reader.threads = 10
    remote.log.storage.manager.class.name = null
    remote.log.storage.manager.class.path = null
    remote.log.storage.manager.impl.prefix = rsm.config.
    remote.log.storage.system.enable = false
    replica.fetch.backoff.ms = 1000
    replica.fetch.max.bytes = 1048576
    replica.fetch.min.bytes = 1
    replica.fetch.response.max.bytes = 10485760
    replica.fetch.wait.max.ms = 500
    replica.high.watermark.checkpoint.interval.ms = 5000
    replica.lag.time.max.ms = 30000
    replica.selector.class = null
    replica.socket.receive.buffer.bytes = 65536
    replica.socket.timeout.ms = 30000
    replication.quota.window.num = 11
    replication.quota.window.size.seconds = 1
    request.timeout.ms = 30000
    reserved.broker.max.id = 1000
    sasl.client.callback.handler.class = null
    sasl.enabled.mechanisms = [GSSAPI]
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.principal.to.local.rules = [DEFAULT]
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.connect.timeout.ms = null
    sasl.login.read.timeout.ms = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.login.retry.backoff.max.ms = 10000
    sasl.login.retry.backoff.ms = 100
    sasl.mechanism.controller.protocol = GSSAPI
    sasl.mechanism.inter.broker.protocol = GSSAPI
    sasl.oauthbearer.clock.skew.seconds = 30
    sasl.oauthbearer.expected.audience = null
    sasl.oauthbearer.expected.issuer = null
    sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
    sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
    sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
    sasl.oauthbearer.jwks.endpoint.url = null
    sasl.oauthbearer.scope.claim.name = scope
    sasl.oauthbearer.sub.claim.name = sub
    sasl.oauthbearer.token.endpoint.url = null
    sasl.server.callback.handler.class = null
    sasl.server.max.receive.size = 524288
    security.inter.broker.protocol = SSL
    security.providers = null
    server.max.startup.time.ms = 9223372036854775807
    socket.connection.setup.timeout.max.ms = 30000
    socket.connection.setup.timeout.ms = 10000
    socket.listen.backlog.size = 50
    socket.receive.buffer.bytes = 102400
    socket.request.max.bytes = 104857600
    socket.send.buffer.bytes = 102400
    ssl.cipher.suites = []
    ssl.client.auth = none
    ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
    ssl.endpoint.identification.algorithm =
    ssl.engine.factory.class = null
    ssl.key.password = [hidden]
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.certificate.chain = null
    ssl.keystore.key = null
    ssl.keystore.location = /opt/bitnami/kafka/config/certs/keystore.pem
    ssl.keystore.password = null
    ssl.keystore.type = PEM
    ssl.principal.mapping.rules = DEFAULT
    ssl.protocol = TLSv1.3
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.certificates = null
    ssl.truststore.location = /opt/bitnami/kafka/config/certs/CA.pem
    ssl.truststore.password = null
    ssl.truststore.type = PEM
    transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
    transaction.max.timeout.ms = 900000
    transaction.partition.verification.enable = true
    transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
    transaction.state.log.load.buffer.size = 5242880
    transaction.state.log.min.isr = 1
    transaction.state.log.num.partitions = 50
    transaction.state.log.replication.factor = 1
    transaction.state.log.segment.bytes = 104857600
    transactional.id.expiration.ms = 604800000
    unclean.leader.election.enable = false
    unstable.api.versions.enable = false
    zookeeper.clientCnxnSocket = null
    zookeeper.connect = null
    zookeeper.connection.timeout.ms = null
    zookeeper.max.in.flight.requests = 10
    zookeeper.metadata.migration.enable = false
    zookeeper.session.timeout.ms = 18000
    zookeeper.set.acl = false
    zookeeper.ssl.cipher.suites = null
    zookeeper.ssl.client.enable = false
    zookeeper.ssl.crl.enable = false
    zookeeper.ssl.enabled.protocols = null
    zookeeper.ssl.endpoint.identification.algorithm = HTTPS
    zookeeper.ssl.keystore.location = null
    zookeeper.ssl.keystore.password = null
    zookeeper.ssl.keystore.type = null
    zookeeper.ssl.ocsp.enable = false
    zookeeper.ssl.protocol = TLSv1.2
    zookeeper.ssl.truststore.location = null
    zookeeper.ssl.truststore.password = null
    zookeeper.ssl.truststore.type = null
 (kafka.server.KafkaConfig)
[2024-03-01 10:30:53,990] INFO [BrokerServer id=1] Waiting for the broker to be unfenced (kafka.server.BrokerServer)
[2024-03-01 10:30:53,991] INFO [QuorumController id=1] The request from broker 1 to unfence has been granted because it has caught up with the offset of its register broker record 19. (org.apache.kafka.controller.BrokerHeartbeatManager)
[2024-03-01 10:30:53,993] INFO [QuorumController id=1] Replayed BrokerRegistrationChangeRecord modifying the registration for broker 1: BrokerRegistrationChangeRecord(brokerId=1, brokerEpoch=19, fenced=-1, inControlledShutdown=0) (org.apache.kafka.controller.ClusterControlManager)
[2024-03-01 10:30:54,021] INFO [BrokerLifecycleManager id=1] The broker has been unfenced. Transitioning from RECOVERY to RUNNING. (kafka.server.BrokerLifecycleManager)
[2024-03-01 10:30:54,021] INFO [BrokerServer id=1] Finished waiting for the broker to be unfenced (kafka.server.BrokerServer)
[2024-03-01 10:30:54,022] INFO authorizerStart completed for endpoint SSL. Endpoint is now READY. (org.apache.kafka.server.network.EndpointReadyFutures)
[2024-03-01 10:30:54,022] INFO [SocketServer listenerType=BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer)
[2024-03-01 10:30:54,022] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor)
[2024-03-01 10:30:54,023] INFO [BrokerServer id=1] Waiting for all of the authorizer futures to be completed (kafka.server.BrokerServer)
[2024-03-01 10:30:54,023] INFO [BrokerServer id=1] Finished waiting for all of the authorizer futures to be completed (kafka.server.BrokerServer)
[2024-03-01 10:30:54,023] INFO [BrokerServer id=1] Waiting for all of the SocketServer Acceptors to be started (kafka.server.BrokerServer)
[2024-03-01 10:30:54,023] INFO [BrokerServer id=1] Finished waiting for all of the SocketServer Acceptors to be started (kafka.server.BrokerServer)
[2024-03-01 10:30:54,023] INFO [BrokerServer id=1] Transition from STARTING to STARTED (kafka.server.BrokerServer)
[2024-03-01 10:30:54,024] INFO Kafka version: 3.6.1 (org.apache.kafka.common.utils.AppInfoParser)
[2024-03-01 10:30:54,024] INFO Kafka commitId: 5e3c2b738d253ff5 (org.apache.kafka.common.utils.AppInfoParser)
[2024-03-01 10:30:54,024] INFO Kafka startTimeMs: 1709289054023 (org.apache.kafka.common.utils.AppInfoParser)
[2024-03-01 10:30:54,025] INFO [KafkaRaftServer nodeId=1] Kafka Server started (kafka.server.KafkaRaftServer)
[2024-03-01 10:54:25,228] INFO [SocketServer listenerType=BROKER, nodeId=1] Failed authentication with /127.0.0.1 (channelId=127.0.0.1:9092-127.0.0.1:53984-0) (SSL handshake failed) (org.apache.kafka.common.network.Selector)
[2024-03-01 10:54:25,387] INFO [SocketServer listenerType=BROKER, nodeId=1] Failed authentication with /127.0.0.1 (channelId=127.0.0.1:9092-127.0.0.1:53998-0) (SSL handshake failed) (org.apache.kafka.common.network.Selector)
[2024-03-01 10:54:25,805] INFO [SocketServer listenerType=BROKER, nodeId=1] Failed authentication with /127.0.0.1 (channelId=127.0.0.1:9092-127.0.0.1:54010-0) (SSL handshake failed) (org.apache.kafka.common.network.Selector)
jotamartos commented 8 months ago

Hi @jamesacris,

Since you are mounting custom configuration files into the Docker image, I do not know whether any configuration is missing there. I suggest you follow this section of the README to check whether the deployment works as expected:

https://github.com/bitnami/containers/tree/main/bitnami/kafka#security

If it does, you can compare the configuration files between that deployment and the one you have. Alternatively, you can take a look at Kafka's documentation and forums to find out whether any configuration parameter is missing from your conf files.
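
For reference, the security section linked above configures TLS through environment variables and expects the PEM files under fixed names, rather than mounting a full server.properties. A rough sketch of that approach (variable and certificate file names as described in the README's security section; the KRaft-related variables are omitted here, and ./key.pem, a separate private-key file, is a hypothetical addition that is not part of the original setup, so double-check both against the docs):

version: "3"
services:
  kafka:
    image: 'bitnami/kafka:3.6'
    ports:
      - '9092:9092'
    environment:
      # Listener layout equivalent to the mounted server.properties
      - KAFKA_CFG_LISTENERS=SSL://:9092,CONTROLLER://:9093
      - KAFKA_CFG_ADVERTISED_LISTENERS=SSL://localhost:9092
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,SSL:SSL
      - KAFKA_CFG_INTER_BROKER_LISTENER_NAME=SSL
      # TLS handled by the container's own setup scripts
      - KAFKA_TLS_TYPE=PEM
      - KAFKA_CERTIFICATE_PASSWORD=<hidden>
    volumes:
      # PEM file names used in the README's security section (verify against the README)
      - ./keystore.pem:/opt/bitnami/kafka/config/certs/kafka.keystore.pem:ro
      - ./key.pem:/opt/bitnami/kafka/config/certs/kafka.keystore.key:ro
      - ./CA.pem:/opt/bitnami/kafka/config/certs/kafka.truststore.pem:ro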

Thanks

github-actions[bot] commented 7 months ago

This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.

github-actions[bot] commented 7 months ago

Due to the lack of activity in the 5 days since it was marked as "stale", we are closing this issue. Do not hesitate to reopen it later if necessary.