openimsdk / helm-charts

helm charts repository for openim
https://openimsdk.github.io/helm-charts/
Apache License 2.0

The Kafka pods deployed via Helm are stuck in initialization and cannot start properly. #84

Closed · open-yuhaoz closed this 4 months ago

open-yuhaoz commented 5 months ago

Description

Hello, my environment is Kubernetes 1.23.6:

- OS: CentOS 7
- Docker: 20.10.0
- Kubernetes: 1.23.6
- Network plugin: Calico
- CPU: 4 cores
- Memory: 8 GB
- Storage: NFS storageClass, 500 GB

The OpenIM deployment gets stuck when it reaches the Kafka service, installed with:

```shell
helm install im-kafka infra/kafka -f infra/kafka-config.yaml -n openim
```

The pods stay stuck in initialization and cannot run.

Screenshots

(Four WeChat screenshots from 2024-02-21 attached, showing the stuck pods.)

Additional information

No response

cubxxw commented 5 months ago

/assign

openimbot commented 5 months ago

@cubxxw Glad to see you accepted this issue 🤲; it has been assigned to you. I have set the milestone for this issue to , and we are looking forward to your PR!

cubxxw commented 5 months ago

The output provided does not specify the actual reason the container failed. However, common reasons for a container to fail include:

- Application errors within the container.
- Misconfiguration of the container or the Pod.
- Resource constraints, such as insufficient memory or CPU.
- Problems with the container image itself.

To diagnose the issue further, you would typically:

- Inspect the logs of the failed container using kubectl logs [pod-name].
- Check the resource requests and limits set for the Pod or container.
- Review the configuration files (YAML) for any potential misconfigurations.
- Ensure that the container image is accessible and correctly tagged.
- Look into the health probes (liveness and readiness probes) to see if they are failing.

The sketch after this list collects these checks as concrete commands.
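A minimal sketch of those checks, assuming the pod and namespace names visible later in this thread (im-kafka-controller-0, openim); adjust to your deployment:

```shell
# Logs of the failed container
kubectl logs im-kafka-controller-0 -n openim

# Resource requests/limits, probe definitions, and per-pod events
kubectl describe pod im-kafka-controller-0 -n openim

# Recent namespace events, useful for image-pull and probe failures
kubectl get events -n openim --sort-by=.lastTimestamp

# The exact image reference the pod is running
kubectl get pod im-kafka-controller-0 -n openim -o jsonpath='{.spec.containers[*].image}'
```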

open-yuhaoz commented 5 months ago

Hello, cubxxw. Below is the kubectl logs output. The pod resource configuration is 4 CPU cores and 8 GB of memory; I cannot see the specific reason for the failure.
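For crash-looping pods like these, the previous run's log and the release-wide logs can be pulled along these lines (a sketch using the pod and label names shown further down in this thread):

```shell
# Log from the previous (crashed) run of one controller
kubectl logs im-kafka-controller-0 -n openim --previous

# Tail every pod of the Helm release at once via its instance label
kubectl logs -n openim -l app.kubernetes.io/instance=im-kafka --tail=50 --prefix
```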

```
[2024-02-22 01:21:14,395] INFO [broker-0-to-controller-heartbeat-channel-manager]: Shutting down (kafka.server.BrokerToControllerRequestThread)
[2024-02-22 01:21:14,395] ERROR [BrokerServer id=0] Received a fatal error while waiting for the controller to acknowledge that we are caught up (kafka.server.BrokerServer)
java.util.concurrent.CancellationException
    at java.base/java.util.concurrent.CompletableFuture.cancel(CompletableFuture.java:2478)
    at kafka.server.BrokerLifecycleManager$ShutdownEvent.run(BrokerLifecycleManager.scala:506)
    at org.apache.kafka.queue.KafkaEventQueue$EventHandler.run(KafkaEventQueue.java:186)
    at java.base/java.lang.Thread.run(Thread.java:833)
[2024-02-22 01:21:14,395] INFO [broker-0-to-controller-heartbeat-channel-manager]: Shutdown completed (kafka.server.BrokerToControllerRequestThread)
[2024-02-22 01:21:14,396] INFO [BrokerServer id=0] Transition from STARTING to STARTED (kafka.server.BrokerServer)
[2024-02-22 01:21:14,398] ERROR [BrokerServer id=0] Fatal error during broker startup. Prepare to shutdown (kafka.server.BrokerServer)
java.lang.RuntimeException: Received a fatal error while waiting for the controller to acknowledge that we are caught up
    at org.apache.kafka.server.util.FutureUtils.waitWithLogging(FutureUtils.java:68)
    at kafka.server.BrokerServer.startup(BrokerServer.scala:452)
    at kafka.server.KafkaRaftServer.$anonfun$startup$2(KafkaRaftServer.scala:96)
    at kafka.server.KafkaRaftServer.$anonfun$startup$2$adapted(KafkaRaftServer.scala:96)
    at scala.Option.foreach(Option.scala:407)
    at kafka.server.KafkaRaftServer.startup(KafkaRaftServer.scala:96)
    at kafka.Kafka$.main(Kafka.scala:113)
    at kafka.Kafka.main(Kafka.scala)
Caused by: java.util.concurrent.CancellationException
    at java.base/java.util.concurrent.CompletableFuture.cancel(CompletableFuture.java:2478)
    at kafka.server.BrokerLifecycleManager$ShutdownEvent.run(BrokerLifecycleManager.scala:506)
    at org.apache.kafka.queue.KafkaEventQueue$EventHandler.run(KafkaEventQueue.java:186)
    at java.base/java.lang.Thread.run(Thread.java:833)
[2024-02-22 01:21:14,398] INFO [BrokerServer id=0] Transition from STARTED to SHUTTING_DOWN (kafka.server.BrokerServer)
[2024-02-22 01:21:14,399] INFO [BrokerServer id=0] shutting down (kafka.server.BrokerServer)
[2024-02-22 01:21:14,399] INFO [SocketServer listenerType=BROKER, nodeId=0] Stopping socket server request processors (kafka.network.SocketServer)
[2024-02-22 01:21:14,403] INFO Broker to controller channel manager for heartbeat shutdown (kafka.server.BrokerToControllerChannelManagerImpl)
[2024-02-22 01:21:14,405] INFO [SocketServer listenerType=BROKER, nodeId=0] Stopped socket server request processors (kafka.network.SocketServer)
[2024-02-22 01:21:14,406] INFO [data-plane Kafka Request Handler on Broker 0], shutting down (kafka.server.KafkaRequestHandlerPool)
[2024-02-22 01:21:14,408] INFO [data-plane Kafka Request Handler on Broker 0], shut down completely (kafka.server.KafkaRequestHandlerPool)
[2024-02-22 01:21:14,408] INFO [ExpirationReaper-0-AlterAcls]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-02-22 01:21:14,409] INFO [ExpirationReaper-0-AlterAcls]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-02-22 01:21:14,409] INFO [KafkaApi-0] Shutdown complete. (kafka.server.KafkaApis)
[2024-02-22 01:21:14,410] INFO [TransactionCoordinator id=0] Shutting down. (kafka.coordinator.transaction.TransactionCoordinator)
[2024-02-22 01:21:14,410] INFO [Transaction State Manager 0]: Shutdown complete (kafka.coordinator.transaction.TransactionStateManager)
[2024-02-22 01:21:14,410] INFO [TxnMarkerSenderThread-0]: Shutting down (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2024-02-22 01:21:14,410] INFO [TxnMarkerSenderThread-0]: Shutdown completed (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2024-02-22 01:21:14,411] INFO [TransactionCoordinator id=0] Shutdown complete. (kafka.coordinator.transaction.TransactionCoordinator)
[2024-02-22 01:21:14,412] INFO [GroupCoordinator 0]: Shutting down. (kafka.coordinator.group.GroupCoordinator)
[2024-02-22 01:21:14,412] INFO [ExpirationReaper-0-Heartbeat]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-02-22 01:21:14,413] INFO [ExpirationReaper-0-Heartbeat]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-02-22 01:21:14,413] INFO [ExpirationReaper-0-Rebalance]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-02-22 01:21:14,414] INFO [ExpirationReaper-0-Rebalance]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-02-22 01:21:14,414] INFO [GroupCoordinator 0]: Shutdown complete. (kafka.coordinator.group.GroupCoordinator)
[2024-02-22 01:21:14,416] INFO [ReplicaManager broker=0] Shutting down (kafka.server.ReplicaManager)
[2024-02-22 01:21:14,416] INFO [ReplicaFetcherManager on broker 0] shutting down (kafka.server.ReplicaFetcherManager)
[2024-02-22 01:21:14,418] INFO [ReplicaFetcherManager on broker 0] shutdown completed (kafka.server.ReplicaFetcherManager)
[2024-02-22 01:21:14,418] INFO [ReplicaAlterLogDirsManager on broker 0] shutting down (kafka.server.ReplicaAlterLogDirsManager)
[2024-02-22 01:21:14,418] INFO [ReplicaAlterLogDirsManager on broker 0] shutdown completed (kafka.server.ReplicaAlterLogDirsManager)
[2024-02-22 01:21:14,418] INFO [ExpirationReaper-0-Fetch]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-02-22 01:21:14,418] INFO [ExpirationReaper-0-Fetch]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-02-22 01:21:14,419] INFO [ExpirationReaper-0-Produce]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-02-22 01:21:14,419] INFO [ExpirationReaper-0-Produce]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-02-22 01:21:14,419] INFO [ExpirationReaper-0-DeleteRecords]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-02-22 01:21:14,420] INFO [ExpirationReaper-0-DeleteRecords]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-02-22 01:21:14,420] INFO [ExpirationReaper-0-ElectLeader]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-02-22 01:21:14,421] INFO [ExpirationReaper-0-ElectLeader]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-02-22 01:21:14,428] INFO [ReplicaManager broker=0] Shut down completely (kafka.server.ReplicaManager)
[2024-02-22 01:21:14,428] INFO [broker-0-to-controller-alter-partition-channel-manager]: Shutting down (kafka.server.BrokerToControllerRequestThread)
[2024-02-22 01:21:14,429] INFO [broker-0-to-controller-alter-partition-channel-manager]: Shutdown completed (kafka.server.BrokerToControllerRequestThread)
[2024-02-22 01:21:14,429] INFO Broker to controller channel manager for alter-partition shutdown (kafka.server.BrokerToControllerChannelManagerImpl)
[2024-02-22 01:21:14,429] INFO [broker-0-to-controller-forwarding-channel-manager]: Shutting down (kafka.server.BrokerToControllerRequestThread)
[2024-02-22 01:21:14,429] INFO [broker-0-to-controller-forwarding-channel-manager]: Shutdown completed (kafka.server.BrokerToControllerRequestThread)
[2024-02-22 01:21:14,429] INFO Broker to controller channel manager for forwarding shutdown (kafka.server.BrokerToControllerChannelManagerImpl)
[2024-02-22 01:21:14,429] INFO Shutting down. (kafka.log.LogManager)
[2024-02-22 01:21:14,440] WARN /bitnami/kafka/data/.kafka_cleanshutdown (kafka.utils.CoreUtils$)
java.nio.file.FileAlreadyExistsException: /bitnami/kafka/data/.kafka_cleanshutdown
    at java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:94)
    at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:106)
    at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
    at java.base/sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:218)
    at java.base/java.nio.file.Files.newByteChannel(Files.java:380)
    at java.base/java.nio.file.Files.createFile(Files.java:658)
    at kafka.log.LogManager.$anonfun$shutdown$16(LogManager.scala:632)
    at kafka.utils.CoreUtils$.swallow(CoreUtils.scala:68)
    at kafka.log.LogManager.$anonfun$shutdown$10(LogManager.scala:632)
    at kafka.log.LogManager.$anonfun$shutdown$10$adapted(LogManager.scala:618)
    at kafka.utils.Implicits$MapExtensionMethods$.$anonfun$forKeyValue$1(Implicits.scala:62)
    at scala.collection.compat.MapExtensionMethods$.$anonfun$foreachEntry$1(PackageShared.scala:589)
    at scala.collection.mutable.HashMap.$anonfun$foreach$1(HashMap.scala:149)
    at scala.collection.mutable.HashTable.foreachEntry(HashTable.scala:237)
    at scala.collection.mutable.HashTable.foreachEntry$(HashTable.scala:230)
    at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:44)
    at scala.collection.mutable.HashMap.foreach(HashMap.scala:149)
    at scala.collection.compat.MapExtensionMethods$.foreachEntry$extension(PackageShared.scala:589)
    at kafka.log.LogManager.shutdown(LogManager.scala:618)
    at kafka.server.BrokerServer.$anonfun$shutdown$19(BrokerServer.scala:590)
    at kafka.utils.CoreUtils$.swallow(CoreUtils.scala:68)
    at kafka.server.BrokerServer.shutdown(BrokerServer.scala:590)
    at kafka.server.BrokerServer.startup(BrokerServer.scala:505)
    at kafka.server.KafkaRaftServer.$anonfun$startup$2(KafkaRaftServer.scala:96)
    at kafka.server.KafkaRaftServer.$anonfun$startup$2$adapted(KafkaRaftServer.scala:96)
    at scala.Option.foreach(Option.scala:407)
    at kafka.server.KafkaRaftServer.startup(KafkaRaftServer.scala:96)
    at kafka.Kafka$.main(Kafka.scala:113)
    at kafka.Kafka.main(Kafka.scala)
[2024-02-22 01:21:14,441] INFO Shutdown complete. (kafka.log.LogManager)
[2024-02-22 01:21:14,442] INFO [broker-0-ThrottledChannelReaper-Fetch]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-02-22 01:21:14,443] INFO [broker-0-ThrottledChannelReaper-Fetch]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-02-22 01:21:14,443] INFO [broker-0-ThrottledChannelReaper-Produce]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-02-22 01:21:14,443] INFO [broker-0-ThrottledChannelReaper-Produce]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-02-22 01:21:14,443] INFO [broker-0-ThrottledChannelReaper-Request]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-02-22 01:21:14,443] INFO [broker-0-ThrottledChannelReaper-Request]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-02-22 01:21:14,443] INFO [broker-0-ThrottledChannelReaper-ControllerMutation]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-02-22 01:21:14,443] INFO [broker-0-ThrottledChannelReaper-ControllerMutation]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-02-22 01:21:14,444] INFO [SocketServer listenerType=BROKER, nodeId=0] Shutting down socket server (kafka.network.SocketServer)
[2024-02-22 01:21:14,452] INFO [SocketServer listenerType=BROKER, nodeId=0] Shutdown completed (kafka.network.SocketServer)
[2024-02-22 01:21:14,452] INFO Broker and topic stats closed (kafka.server.BrokerTopicStats)
[2024-02-22 01:21:14,453] INFO [BrokerLifecycleManager id=0] closed event queue. (org.apache.kafka.queue.KafkaEventQueue)
[2024-02-22 01:21:14,453] INFO [BrokerServer id=0] shut down completed (kafka.server.BrokerServer)
[2024-02-22 01:21:14,453] INFO [BrokerServer id=0] Transition from SHUTTING_DOWN to SHUTDOWN (kafka.server.BrokerServer)
[2024-02-22 01:21:14,453] ERROR Exiting Kafka due to fatal exception during startup. (kafka.Kafka$)
java.lang.RuntimeException: Received a fatal error while waiting for the controller to acknowledge that we are caught up
    at org.apache.kafka.server.util.FutureUtils.waitWithLogging(FutureUtils.java:68)
    at kafka.server.BrokerServer.startup(BrokerServer.scala:452)
    at kafka.server.KafkaRaftServer.$anonfun$startup$2(KafkaRaftServer.scala:96)
    at kafka.server.KafkaRaftServer.$anonfun$startup$2$adapted(KafkaRaftServer.scala:96)
    at scala.Option.foreach(Option.scala:407)
    at kafka.server.KafkaRaftServer.startup(KafkaRaftServer.scala:96)
    at kafka.Kafka$.main(Kafka.scala:113)
    at kafka.Kafka.main(Kafka.scala)
Caused by: java.util.concurrent.CancellationException
    at java.base/java.util.concurrent.CompletableFuture.cancel(CompletableFuture.java:2478)
    at kafka.server.BrokerLifecycleManager$ShutdownEvent.run(BrokerLifecycleManager.scala:506)
    at org.apache.kafka.queue.KafkaEventQueue$EventHandler.run(KafkaEventQueue.java:186)
    at java.base/java.lang.Thread.run(Thread.java:833)
[2024-02-22 01:21:14,454] INFO [ControllerServer id=0] shutting down (kafka.server.ControllerServer)
[2024-02-22 01:21:14,454] INFO [raft-expiration-reaper]: Shutting down (kafka.raft.TimingWheelExpirationService$ExpiredOperationReaper)
[2024-02-22 01:21:14,495] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-02-22 01:21:14,510] INFO [raft-expiration-reaper]: Shutdown completed (kafka.raft.TimingWheelExpirationService$ExpiredOperationReaper)
[2024-02-22 01:21:14,510] INFO [kafka-0-raft-io-thread]: Shutting down (kafka.raft.KafkaRaftManager$RaftIoThread)
[2024-02-22 01:21:14,510] INFO [RaftManager id=0] Beginning graceful shutdown (org.apache.kafka.raft.KafkaRaftClient)
[2024-02-22 01:21:14,512] INFO [RaftManager id=0] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient)
[2024-02-22 01:21:14,512] WARN [RaftManager id=0] Connection to node 1 (im-kafka-controller-1.im-kafka-controller-headless.openim.svc.cluster.local/192.168.43.217:9093) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-02-22 01:21:14,512] INFO [RaftManager id=0] Node 2 disconnected. (org.apache.kafka.clients.NetworkClient)
[2024-02-22 01:21:14,512] WARN [RaftManager id=0] Connection to node 2 (im-kafka-controller-2.im-kafka-controller-headless.openim.svc.cluster.local/192.168.104.79:9093) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-02-22 01:21:14,595] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[... the MetadataLoader "still catching up" INFO message repeats roughly every 100 ms, interleaved with the same node 1 / node 2 "could not be established" warnings, from 01:21:14,595 through 01:21:19,506 ...]
[2024-02-22 01:21:19,511] WARN [RaftManager id=0] Graceful shutdown timed out after 5000ms (org.apache.kafka.raft.KafkaRaftClient)
[2024-02-22 01:21:19,512] ERROR [kafka-0-raft-io-thread]: Graceful shutdown of RaftClient failed (kafka.raft.KafkaRaftManager$RaftIoThread)
java.util.concurrent.TimeoutException: Timeout expired before graceful shutdown completed
    at org.apache.kafka.raft.KafkaRaftClient$GracefulShutdown.failWithTimeout(KafkaRaftClient.java:2423)
    at org.apache.kafka.raft.KafkaRaftClient.maybeCompleteShutdown(KafkaRaftClient.java:2169)
    at org.apache.kafka.raft.KafkaRaftClient.poll(KafkaRaftClient.java:2234)
    at kafka.raft.KafkaRaftManager$RaftIoThread.doWork(RaftManager.scala:64)
    at org.apache.kafka.server.util.ShutdownableThread.run(ShutdownableThread.java:127)
[2024-02-22 01:21:19,512] INFO [kafka-0-raft-io-thread]: Shutdown completed (kafka.raft.KafkaRaftManager$RaftIoThread)
[2024-02-22 01:21:19,517] INFO [kafka-0-raft-outbound-request-thread]: Shutting down (kafka.raft.RaftSendThread)
[2024-02-22 01:21:19,518] INFO [kafka-0-raft-outbound-request-thread]: Shutdown completed (kafka.raft.RaftSendThread)
[2024-02-22 01:21:19,524] INFO [SocketServer listenerType=CONTROLLER, nodeId=0] Stopping socket server request processors (kafka.network.SocketServer)
[2024-02-22 01:21:19,528] INFO [SocketServer listenerType=CONTROLLER, nodeId=0] Stopped socket server request processors (kafka.network.SocketServer)
[2024-02-22 01:21:19,528] INFO [QuorumController id=0] QuorumController#beginShutdown: shutting down event queue. (org.apache.kafka.queue.KafkaEventQueue)
[2024-02-22 01:21:19,528] INFO [SocketServer listenerType=CONTROLLER, nodeId=0] Shutting down socket server (kafka.network.SocketServer)
[2024-02-22 01:21:19,531] INFO [SocketServer listenerType=CONTROLLER, nodeId=0] Shutdown completed (kafka.network.SocketServer)
[2024-02-22 01:21:19,531] INFO [data-plane Kafka Request Handler on Broker 0], shutting down (kafka.server.KafkaRequestHandlerPool)
[2024-02-22 01:21:19,531] INFO [data-plane Kafka Request Handler on Broker 0], shut down completely (kafka.server.KafkaRequestHandlerPool)
[2024-02-22 01:21:19,532] INFO [ExpirationReaper-0-AlterAcls]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-02-22 01:21:19,532] INFO [ExpirationReaper-0-AlterAcls]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-02-22 01:21:19,532] INFO [controller-0-ThrottledChannelReaper-Fetch]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-02-22 01:21:19,533] INFO [controller-0-ThrottledChannelReaper-Fetch]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-02-22 01:21:19,533] INFO [controller-0-ThrottledChannelReaper-Produce]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-02-22 01:21:19,533] INFO [controller-0-ThrottledChannelReaper-Produce]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-02-22 01:21:19,533] INFO [controller-0-ThrottledChannelReaper-Request]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-02-22 01:21:19,533] INFO [controller-0-ThrottledChannelReaper-Request]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-02-22 01:21:19,533] INFO [controller-0-ThrottledChannelReaper-ControllerMutation]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-02-22 01:21:19,535] INFO [controller-0-ThrottledChannelReaper-ControllerMutation]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-02-22 01:21:19,535] INFO [QuorumController id=0] closed event queue. (org.apache.kafka.queue.KafkaEventQueue)
[2024-02-22 01:21:19,537] INFO [SharedServer id=0] Stopping SharedServer (kafka.server.SharedServer)
[2024-02-22 01:21:19,537] INFO [MetadataLoader id=0] beginShutdown: shutting down event queue. (org.apache.kafka.queue.KafkaEventQueue)
[2024-02-22 01:21:19,538] INFO [SnapshotGenerator id=0] beginShutdown: shutting down event queue. (org.apache.kafka.queue.KafkaEventQueue)
[2024-02-22 01:21:19,538] INFO [SnapshotGenerator id=0] closed event queue. (org.apache.kafka.queue.KafkaEventQueue)
[2024-02-22 01:21:19,538] INFO [MetadataLoader id=0] closed event queue. (org.apache.kafka.queue.KafkaEventQueue)
[2024-02-22 01:21:19,538] INFO [SnapshotGenerator id=0] closed event queue. (org.apache.kafka.queue.KafkaEventQueue)
[2024-02-22 01:21:19,539] INFO Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics)
[2024-02-22 01:21:19,539] INFO Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics)
[2024-02-22 01:21:19,539] INFO Metrics reporters closed (org.apache.kafka.common.metrics.Metrics)
[2024-02-22 01:21:19,541] INFO App info kafka.server for 0 unregistered (org.apache.kafka.common.utils.AppInfoParser)
[2024-02-22 01:21:19,542] INFO App info kafka.server for 0 unregistered (org.apache.kafka.common.utils.AppInfoParser)
```

```
[root@openim-master ~/helm-charts ]# cat infra/kafka-config.yaml
# This configuration file is used to override the values.yaml variables. You can modify it according to your own needs.
# If you modify the account information inside, it may need to be synchronized in the application config file.
global:
  # This is your storageClass. Please change it according to the k8s environment settings of your server.
  storageClass: "nfs-client"
  imageRegistry: "m.daocloud.io/docker.io"
controller:
  replicaCount: 3
sasl:
  client:
    users:
provisioning:
  enabled: true
  numPartitions: 1
  replicationFactor: 1
  topics:
metrics:
  kafka:
    enabled: false
  serviceMonitor:
    enabled: false
    labels:
      release: kube-prometheus-stack

[root@openim-master ~/helm-charts ]# kubectl get pvc -n openim
NAME                             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-im-kafka-controller-0       Bound    pvc-e7bcb554-fc1b-4b39-b431-27eac56b3e7b   1Gi        RWO            nfs-client     14h
data-im-kafka-controller-1       Bound    pvc-0e87fb42-7049-4887-8fe4-6f023ffec3cc   1Gi        RWO            nfs-client     14h
data-im-kafka-controller-2       Bound    pvc-1237ae8c-6acc-442e-8d47-611655adde52   1Gi        RWO            nfs-client     14h
data-im-mysql-0                  Bound    pvc-26b3ff90-1f7d-4550-9056-4a1e32a6f569   1Gi        RWO            nfs-client     14h
im-minio                         Bound    pvc-ff79ab88-38a3-4776-8c02-ee1e8ebc7cd3   1Gi        RWO            nfs-client     14h
im-mongodb                       Bound    pvc-78ef01ce-ea6d-43a2-8fcb-28141bb2b9bb   1Gi        RWO            nfs-client     14h
redis-data-im-redis-master-0     Bound    pvc-ac8fa041-904c-4ad2-87c0-9701c08aa477   1Gi        RWO            nfs-client     14h
redis-data-im-redis-replicas-0   Bound    pvc-4a7d32a8-e2af-483f-bad2-8b15b0977b95   1Gi        RWO            nfs-client     14h

[root@openim-master ~/helm-charts ]# kubectl get pod -n openim
NAME                                      READY   STATUS                  RESTARTS          AGE
im-kafka-controller-0                     0/1     CrashLoopBackOff        147 (2m45s ago)   14h
im-kafka-controller-1                     0/1     CrashLoopBackOff        178 (107s ago)    14h
im-kafka-controller-2                     0/1     CrashLoopBackOff        178 (3m1s ago)    14h
im-kafka-provisioning-ppqzf               0/1     Init:CrashLoopBackOff   126 (3m36s ago)   14h
im-minio-8b55b5c78-cdqld                  1/1     Running                 0                 14h
im-mongodb-6ddb5d7c48-bl9rt               1/1     Running                 0                 14h
im-mysql-0                                1/1     Running                 0                 14h
im-redis-master-0                         1/1     Running                 0                 14h
im-redis-replicas-0                       1/1     Running                 1 (14h ago)       14h
nfs-client-provisioner-75554d5f97-lbdmn   1/1     Running                 1 (14h ago)       15h
```
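The overrides that Helm actually applied can be compared against this file; a minimal sketch using standard Helm commands:

```shell
# User-supplied overrides for the release (should match infra/kafka-config.yaml)
helm get values im-kafka -n openim

# The fully merged values, including the chart defaults
helm get values im-kafka -n openim --all
```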

```
[root@openim-master ~/helm-charts ]# kubectl logs im-kafka-provisioning-ppqzf -n openim
Error from server (BadRequest): container "kafka-provisioning" in pod "im-kafka-provisioning-ppqzf" is waiting to start: PodInitializing

[root@openim-master ~/helm-charts ]# kubectl describe pod im-kafka-provisioning-ppqzf -n openim
Name:           im-kafka-provisioning-ppqzf
Namespace:      openim
Priority:       0
Node:           livekit-node1/172.16.200.112
Start Time:     Wed, 21 Feb 2024 18:46:18 +0800
Labels:         app.kubernetes.io/component=kafka-provisioning
                app.kubernetes.io/instance=im-kafka
                app.kubernetes.io/managed-by=Helm
                app.kubernetes.io/name=kafka
                app.kubernetes.io/version=3.5.1
                controller-uid=777cda42-ff16-4bd8-a883-13b2eb5736e5
                helm.sh/chart=kafka-25.1.11
                job-name=im-kafka-provisioning
Annotations:    cni.projectcalico.org/containerID: 3d5adc9aff8739a63e2f64b23a8257d1195e59a6c4c9a12fb8070d1ce0e3d70b
                cni.projectcalico.org/podIP: 192.168.43.218/32
                cni.projectcalico.org/podIPs: 192.168.43.218/32
                seccomp.security.alpha.kubernetes.io/pod: runtime/default
Status:         Pending
IP:             192.168.43.218
IPs:
  IP:           192.168.43.218
Controlled By:  Job/im-kafka-provisioning
Init Containers:
  wait-for-available-kafka:
    Container ID:  docker://f17829eed3bb33d45672c5fdba410d181d24e6d4de8cdeec9ee59af7324fa0ba
    Image:         m.daocloud.io/docker.io/bitnami/kafka:3.5.1-debian-11-r44
    Image ID:      docker-pullable://m.daocloud.io/docker.io/bitnami/kafka@sha256:a6de8c395d8fe2126f57178a95bbd8c825b88a103c5b3f9a2ed7774a6fd44f84
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/bash
    Args:
      -ec
      wait-for-port \
        --host=im-kafka \
        --state=inuse \
        --timeout=120 \
        9092;
      echo "Kafka is available";

State:          Running
  Started:      Thu, 22 Feb 2024 09:25:29 +0800
Last State:     Terminated
  Reason:       Error
  Exit Code:    1
  Started:      Thu, 22 Feb 2024 09:18:28 +0800
  Finished:     Thu, 22 Feb 2024 09:20:28 +0800
Ready:          False
Restart Count:  127
Environment:    <none>
Mounts:
  /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rvtpl (ro)

Containers:
  kafka-provisioning:
    Container ID:
    Image:          m.daocloud.io/docker.io/bitnami/kafka:3.5.1-debian-11-r44
    Image ID:
    Port:           <none>
    Host Port:      <none>
    Command:
      /bin/bash
    Args:
      -ec
  echo "Configuring environment"
  . /opt/bitnami/scripts/libkafka.sh
  export CLIENT_CONF="${CLIENT_CONF:-/tmp/client.properties}"
  if [ ! -f "$CLIENT_CONF" ]; then
    touch $CLIENT_CONF

    kafka_common_conf_set "$CLIENT_CONF" security.protocol "SASL_PLAINTEXT"
    kafka_common_conf_set "$CLIENT_CONF" sasl.mechanism PLAIN
    kafka_common_conf_set "$CLIENT_CONF" sasl.jaas.config "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"$SASL_USERNAME\" password=\"$SASL_USER_PASSWORD\";"
  fi

  echo "Running pre-provisioning script if any given"

  kafka_provisioning_commands=(
    "/opt/bitnami/kafka/bin/kafka-topics.sh \
        --create \
        --if-not-exists \
        --bootstrap-server ${KAFKA_SERVICE} \
        --replication-factor 1 \
        --partitions 1 \
        --config flush.messages=1 \
        --config max.message.bytes=64000 \
        --command-config ${CLIENT_CONF} \
        --topic latestMsgToRedis"
    "/opt/bitnami/kafka/bin/kafka-topics.sh \
        --create \
        --if-not-exists \
        --bootstrap-server ${KAFKA_SERVICE} \
        --replication-factor 1 \
        --partitions 1 \
        --config flush.messages=1 \
        --config max.message.bytes=64000 \
        --command-config ${CLIENT_CONF} \
        --topic msgToPush"
    "/opt/bitnami/kafka/bin/kafka-topics.sh \
        --create \
        --if-not-exists \
        --bootstrap-server ${KAFKA_SERVICE} \
        --replication-factor 1 \
        --partitions 1 \
        --config flush.messages=1 \
        --config max.message.bytes=64000 \
        --command-config ${CLIENT_CONF} \
        --topic offlineMsgToMongoMysql"
  )

  echo "Starting provisioning"
  for ((index=0; index < ${#kafka_provisioning_commands[@]}; index+=1))
  do
    for j in $(seq ${index} $((${index}+1-1)))
    do
        ${kafka_provisioning_commands[j]} & # Async command
    done
    wait  # Wait the end of the jobs
  done

  echo "Running post-provisioning script if any given"

  echo "Provisioning succeeded"

State:          Waiting
  Reason:       PodInitializing
Ready:          False
Restart Count:  0
Environment:
  BITNAMI_DEBUG:       false
  KAFKA_SERVICE:       im-kafka:9092
  SASL_USERNAME:       root
  SASL_USER_PASSWORD:  <set to the key 'system-user-password' in secret 'im-kafka-user-passwords'>  Optional: false
Mounts:
  /tmp from tmp (rw)
  /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rvtpl (ro)

Conditions:
  Type              Status
  Initialized       False
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  tmp:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:
  kube-api-access-rvtpl:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:
    DownwardAPI:             true
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason   Age                    From     Message
  ----     ------   ----                   ----     -------
  Normal   Pulled   15m (x126 over 14h)    kubelet  Container image "m.daocloud.io/docker.io/bitnami/kafka:3.5.1-debian-11-r44" already present on machine
  Warning  BackOff  5m2s (x2854 over 14h)  kubelet  Back-off restarting failed container
```