confluentinc / kafka-images

Confluent Docker images for Apache Kafka
Apache License 2.0

cp-server arm64 fails when started as a single node on Kubernetes #175

Open denyskril opened 2 years ago

denyskril commented 2 years ago

Hi all,

When running Kafka on Kubernetes as a single node, Kafka can't resolve kafka-0.kafka.confluent.svc. There is no such problem on amd64.
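For a per-pod name like kafka-0.kafka.confluent.svc to resolve at all, the StatefulSet must be backed by a headless Service. A minimal sketch of such a Service, where the name and namespace match the DNS name in the error and everything else (labels, port name) is an assumption:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kafka            # "kafka" in kafka-0.kafka.confluent.svc
  namespace: confluent   # "confluent" in kafka-0.kafka.confluent.svc
spec:
  clusterIP: None        # headless: gives each pod its own kafka-<ordinal>.kafka.confluent.svc record
  selector:
    app: kafka           # assumed pod label
  ports:
    - name: internal
      port: 9093
```

If the Service is not headless (or its name/namespace differ), the per-pod DNS record never exists and DNS resolution fails regardless of CPU architecture.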

Version: 7.2.1

Error:

```
[2022-08-05 12:14:48,024] INFO ProducerConfig values:
	acks = -1
	batch.size = 16384
	bootstrap.servers = [kafka-0.kafka.confluent.svc:9093]
	buffer.memory = 33554432
	client.dns.lookup = use_all_dns_ips
	client.id = confluent-telemetry-reporter-local-producer
	compression.type = lz4
	connections.max.idle.ms = 600000
	delivery.timeout.ms = 120000
	enable.idempotence = false
	interceptor.classes = []
	key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
	linger.ms = 500
	max.block.ms = 60000
	max.in.flight.requests.per.connection = 1
	max.request.size = 10485760
	metadata.max.age.ms = 300000
	metadata.max.idle.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
	receive.buffer.bytes = 32768
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 30000
	retries = 2147483647
	retry.backoff.ms = 500
	sasl.client.callback.handler.class =
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.login.callback.handler.class = null
	sasl.login.class = null
	sasl.login.connect.timeout.ms = null
	sasl.login.read.timeout.ms = null
	sasl.login.refresh.buffer.seconds = 300
	sasl.login.refresh.min.period.seconds = 60
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.window.jitter = 0.05
	sasl.login.retry.backoff.max.ms = 10000
	sasl.login.retry.backoff.ms = 100
	sasl.mechanism = PLAIN
	sasl.oauthbearer.clock.skew.seconds = 30
	sasl.oauthbearer.expected.audience = null
	sasl.oauthbearer.expected.issuer =
	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
	sasl.oauthbearer.jwks.endpoint.url =
	sasl.oauthbearer.scope.claim.name = scope
	sasl.oauthbearer.sub.claim.name = sub
	sasl.oauthbearer.token.endpoint.url =
	security.protocol = SSL
	security.providers = null
	send.buffer.bytes = 131072
	socket.connection.setup.timeout.max.ms = 30000
	socket.connection.setup.timeout.ms = 10000
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
	ssl.endpoint.identification.algorithm = https
	ssl.engine.factory.class = null
	ssl.key.password = [hidden]
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.certificate.chain = null
	ssl.keystore.key = null
	ssl.keystore.location = /etc/kafka/secrets/keystore.jks
	ssl.keystore.password = [hidden]
	ssl.keystore.type = JKS
	ssl.protocol = TLSv1.3
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.certificates = null
	ssl.truststore.location = /etc/kafka/secrets/truststore.jks
	ssl.truststore.password = [hidden]
	ssl.truststore.type = JKS
	transaction.timeout.ms = 60000
	transactional.id = null
	value.serializer = class io.confluent.telemetry.serde.OpencensusMetricsProto
 (org.apache.kafka.clients.producer.ProducerConfig)
[2022-08-05 12:14:48,048] WARN Couldn't resolve server kafka-0.kafka:9093 from bootstrap.servers as DNS resolution failed for kafka-0.kafka.confluent.svc (org.apache.kafka.clients.ClientUtils)
[2022-08-05 12:14:48,049] INFO [Producer clientId=confluent-telemetry-reporter-local-producer] Closing the Kafka producer with timeoutMillis = 0 ms. (org.apache.kafka.clients.producer.KafkaProducer)
[2022-08-05 12:14:48,052] INFO Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics)
[2022-08-05 12:14:48,053] INFO Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics)
[2022-08-05 12:14:48,054] INFO Metrics reporters closed (org.apache.kafka.common.metrics.Metrics)
[2022-08-05 12:14:48,055] INFO App info kafka.producer for confluent-telemetry-reporter-local-producer unregistered (org.apache.kafka.common.utils.AppInfoParser)
[2022-08-05 12:14:48,055] INFO [BrokerServer id=0] Transition from STARTING to STARTED (kafka.server.BrokerServer)
[2022-08-05 12:14:48,074] ERROR [BrokerServer id=0] Fatal error during broker startup. Prepare to shutdown (kafka.server.BrokerServer)
org.apache.kafka.common.KafkaException: Failed to construct kafka producer
	at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:439)
	at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:289)
	at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:316)
	at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:301)
	at io.confluent.telemetry.exporter.kafka.KafkaExporter.<init>(KafkaExporter.java:99)
	at io.confluent.telemetry.exporter.kafka.KafkaExporter$Builder.build(KafkaExporter.java:327)
	at io.confluent.telemetry.reporter.TelemetryReporter.initExporters(TelemetryReporter.java:199)
	at io.confluent.telemetry.reporter.TelemetryReporter.initExporters(TelemetryReporter.java:183)
	at io.confluent.telemetry.reporter.TelemetryReporter.startMetricCollectorTask(TelemetryReporter.java:453)
	at io.confluent.telemetry.reporter.TelemetryReporter.contextChange(TelemetryReporter.java:343)
	at kafka.server.KafkaBroker$.$anonfun$notifyMetricsReporters$1(KafkaBroker.scala:89)
	at kafka.server.KafkaBroker$.$anonfun$notifyMetricsReporters$1$adapted(KafkaBroker.scala:88)
	at scala.collection.IterableOnceOps.foreach(IterableOnce.scala:563)
	at scala.collection.IterableOnceOps.foreach$(IterableOnce.scala:561)
	at scala.collection.AbstractIterable.foreach(Iterable.scala:926)
	at kafka.server.KafkaBroker$.notifyMetricsReporters(KafkaBroker.scala:88)
	at kafka.server.DynamicMetricsReporters.createReporters(DynamicBrokerConfig.scala:987)
	at kafka.server.DynamicMetricsReporters.<init>(DynamicBrokerConfig.scala:930)
	at kafka.server.DynamicBrokerConfig.addReconfigurables(DynamicBrokerConfig.scala:327)
	at kafka.server.BrokerServer.startup(BrokerServer.scala:589)
	at kafka.server.KafkaRaftServer.$anonfun$startup$2(KafkaRaftServer.scala:146)
	at kafka.server.KafkaRaftServer.$anonfun$startup$2$adapted(KafkaRaftServer.scala:146)
	at scala.Option.foreach(Option.scala:437)
	at kafka.server.KafkaRaftServer.startup(KafkaRaftServer.scala:146)
	at kafka.Kafka$.main(Kafka.scala:108)
	at kafka.Kafka.main(Kafka.scala)
Caused by: org.apache.kafka.common.config.ConfigException: No resolvable bootstrap urls given in bootstrap.servers
	at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:104)
	at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:63)
	at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:413)
	... 25 more
```
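The fatal `ConfigException` comes from Kafka's bootstrap address validation (`ClientUtils.parseAndValidateAddresses` in the stack trace), which drops every `host:port` entry whose hostname fails DNS resolution and aborts if none survive. A minimal Python sketch of that behavior, where the hostnames are illustrative and the function only mimics the Java logic:

```python
import socket

def parse_and_validate(bootstrap_servers):
    """Rough sketch of Kafka's ClientUtils.parseAndValidateAddresses:
    keep only host:port entries whose hostname resolves via DNS."""
    resolvable = []
    for entry in bootstrap_servers:
        host, _, port = entry.rpartition(":")
        try:
            socket.getaddrinfo(host, int(port))  # DNS lookup, like the WARN in the log
            resolvable.append(entry)
        except socket.gaierror:
            print(f"Couldn't resolve server {entry} from bootstrap.servers")
    if not resolvable:
        # Mirrors: ConfigException: No resolvable bootstrap urls given in bootstrap.servers
        raise ValueError("No resolvable bootstrap urls given in bootstrap.servers")
    return resolvable

print(parse_and_validate(["localhost:9093"]))  # → ['localhost:9093']
```

This is why a single unresolvable pod DNS name is fatal here: the telemetry reporter's producer has only one bootstrap address, so the "no resolvable urls" case is hit immediately and broker startup aborts.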

OneCricketeer commented 1 year ago

Couldn't resolve server kafka-0.kafka:9093

It's correctly resolving kafka-0.kafka.confluent.svc

It's not resolving the other name, which looks like something you've configured in your advertised listeners.

Are you using Helm or an Operator?