Closed: codeninja55 closed this issue 4 years ago.
Hi,
Could you please create a small reproducer we can check easily?
This definitely sounds like something we need to fix for 1.4.0.Final.
Thanks.
It seems to be running now if I use the gradle:6.3.0-jdk11 base image.
An example can be found here: https://github.com/codeninja55/quarkus-producer-example
However, I seem to be unable to pass the correct hostname for the broker. I have been getting this message consistently in the example and in my current project:
2020-04-18 09:16:46,540 INFO [io.sm.re.me.ex.MediatorManager] (main) Deployment done... start processing
2020-04-18 09:16:46,568 INFO [io.sm.re.me.im.ConfiguredChannelFactory] (main) Found incoming connectors: [smallrye-kafka]
2020-04-18 09:16:46,568 INFO [io.sm.re.me.im.ConfiguredChannelFactory] (main) Found outgoing connectors: [smallrye-kafka]
2020-04-18 09:16:46,570 INFO [io.sm.re.me.im.ConfiguredChannelFactory] (main) Channel manager initializing...
2020-04-18 09:16:46,581 INFO [io.sm.re.me.ka.im.KafkaSink] (main) Setting bootstrap.servers to kafka:9092
2020-04-18 09:16:46,599 INFO [or.ap.ka.cl.pr.ProducerConfig] (main) ProducerConfig values:
acks = 1
batch.size = 16384
bootstrap.servers = [kafka:9092]
buffer.memory = 33554432
client.dns.lookup = default
client.id =
compression.type = none
connections.max.idle.ms = 540000
delivery.timeout.ms = 120000
enable.idempotence = false
interceptor.classes = []
key.serializer = class org.apache.kafka.common.serialization.IntegerSerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 2147483647
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
2020-04-18 09:16:46,730 INFO [or.ap.ka.co.ut.AppInfoParser] (main) Kafka version: 2.4.1
2020-04-18 09:16:46,731 INFO [or.ap.ka.co.ut.AppInfoParser] (main) Kafka commitId: c57222ae8cd7866b
2020-04-18 09:16:46,731 INFO [or.ap.ka.co.ut.AppInfoParser] (main) Kafka startTimeMs: 1587201406728
2020-04-18 09:16:46,750 INFO [io.sm.re.me.ka.im.KafkaSink] (main) Setting bootstrap.servers to kafka:9092
2020-04-18 09:16:46,750 INFO [or.ap.ka.cl.pr.ProducerConfig] (main) ProducerConfig values:
acks = 1
batch.size = 16384
bootstrap.servers = [kafka:9092]
buffer.memory = 33554432
client.dns.lookup = default
client.id =
compression.type = none
connections.max.idle.ms = 540000
delivery.timeout.ms = 120000
enable.idempotence = false
interceptor.classes = []
key.serializer = class org.apache.kafka.common.serialization.IntegerSerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 2147483647
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
2020-04-18 09:16:46,758 INFO [or.ap.ka.co.ut.AppInfoParser] (main) Kafka version: 2.4.1
2020-04-18 09:16:46,759 INFO [or.ap.ka.co.ut.AppInfoParser] (main) Kafka commitId: c57222ae8cd7866b
2020-04-18 09:16:46,760 INFO [or.ap.ka.co.ut.AppInfoParser] (main) Kafka startTimeMs: 1587201406758
2020-04-18 09:16:46,763 INFO [io.sm.re.me.ex.MediatorManager] (main) Initializing mediators
2020-04-18 09:16:46,801 INFO [io.sm.re.me.ex.MediatorManager] (main) Connecting mediators
2020-04-18 09:16:46,803 INFO [io.sm.re.me.ex.MediatorManager] (main) Connecting method com.example.kafka.streams.producer.ValuesGenerator#weatherStations to sink weather-stations
2020-04-18 09:16:46,852 INFO [io.sm.re.me.ex.MediatorManager] (main) Connecting method com.example.kafka.streams.producer.ValuesGenerator#generate to sink temperature-values
2020-04-18 09:16:46,862 INFO [io.quarkus] (main) producer 1.0.0-SNAPSHOT (powered by Quarkus 1.4.0.CR1) started in 0.849s.
2020-04-18 09:16:46,862 INFO [io.quarkus] (main) Profile prod activated.
2020-04-18 09:16:46,863 INFO [io.quarkus] (main) Installed features: [cdi, kotlin, mutiny, smallrye-context-propagation, smallrye-reactive-messaging, smallrye-reactive-messaging-kafka, smallrye-reactive-streams-operators, vertx]
2020-04-18 09:16:46,954 INFO [or.ap.ka.cl.Metadata] (kafka-producer-network-thread | producer-1) [Producer clientId=producer-1] Cluster ID: uLkJz4cuTQuELwhRtbguVA
2020-04-18 09:16:46,954 INFO [or.ap.ka.cl.Metadata] (kafka-producer-network-thread | producer-2) [Producer clientId=producer-2] Cluster ID: uLkJz4cuTQuELwhRtbguVA
2020-04-18 09:16:46,967 WARN [or.ap.ka.cl.NetworkClient] (kafka-producer-network-thread | producer-1) [Producer clientId=producer-1] Connection to node 1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
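For context on the warning above (my reading, not confirmed in the thread): the client did reach the broker once (the Cluster ID was logged), but the broker's metadata response advertises localhost, so subsequent connections go to localhost/127.0.0.1:9092. If that is the case, the fix is broker-side. A minimal server.properties sketch, assuming the broker is reachable as kafka on the Docker network:

```properties
# Bind on all interfaces inside the container, but advertise the
# hostname that other containers can resolve (not localhost).
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://kafka:9092
```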
@codeninja55 I am going to close this since the originally reported problem is not a Quarkus issue. As for your subsequent comment, please open another issue or ask on the Zulip chat if you think the Kafka application isn't working properly.
So I'm having the same issue as in https://github.com/quarkusio/quarkus/issues/8655#issuecomment-615824292, whereby I've configured a single listener in Kafka:
listeners=JAVA://kafka:9082
advertised.listeners=JAVA://kafka:9082
listener.security.protocol.map=JAVA:PLAINTEXT
advertised.host.name=kafka
host.name=kafka
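(As an aside, host.name and advertised.host.name are legacy settings that have long been deprecated in favour of listeners/advertised.listeners, so with the listener-based form above they should be redundant. A trimmed sketch of the same intent; the inter.broker.listener.name line is my assumption, since the broker's default listener name PLAINTEXT is not in the custom map:)

```properties
# Single named listener mapped to the PLAINTEXT protocol; the JAVA
# listener name and the kafka hostname follow the setup above.
listeners=JAVA://0.0.0.0:9082
advertised.listeners=JAVA://kafka:9082
listener.security.protocol.map=JAVA:PLAINTEXT
# With only a custom listener name defined, the broker also needs:
inter.broker.listener.name=JAVA
```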
Running Kafka in a Docker container:
docker run -v ${PWD}/server.properties:/config/server.properties \
--name=kafka -p 9082:9082 --network=jobs-net kafka:2.13-2.6.0
My Quarkus 1.7.0.Final app is using the io.quarkus.quarkus-kafka-streams extension with the following property:
quarkus.kafka-streams.bootstrap-servers=${KAFKA_HOST:localhost}:${KAFKA_PORT:9092}
When I start up the application, I notice in the logs that it is unable to resolve the Kafka host and then falls back to localhost:9082:
$ docker run -e KAFKA_HOST=kafka -e KAFKA_PORT=9082 \
-p 8030:8030 --network=jobs-net --name stream-data-api stream-data-api:1.5-SNAPSHOT
__ ____ __ _____ ___ __ ____ ______
--/ __ \/ / / / _ | / _ \/ //_/ / / / __/
-/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \
--\___\_\____/_/ |_/_/|_/_/|_|\____/___/
2020-08-27 20:04:52,771 INFO [io.und.web.jsr] (main) UT026003: Adding annotated server endpoint class com.brightfield.streams.DataInfoSocket for path /chat/stats/{username}
2020-08-27 20:04:53,181 INFO [org.apa.kaf.cli.adm.AdminClientConfig] (main) AdminClientConfig values:
bootstrap.servers = [kafka/<unresolved>:9082]
client.dns.lookup = default
client.id =
connections.max.idle.ms = 300000
default.api.timeout.ms = 60000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 2147483647
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.2
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
2020-08-27 20:04:53,332 WARN [org.apa.kaf.cli.adm.AdminClientConfig] (main) The configuration 'ssl.endpoint.identification.algorithm' was supplied but isn't a known config.
2020-08-27 20:04:53,334 INFO [org.apa.kaf.com.uti.AppInfoParser] (main) Kafka version: 2.5.0
2020-08-27 20:04:53,334 INFO [org.apa.kaf.com.uti.AppInfoParser] (main) Kafka commitId: 66563e712b0b9f84
2020-08-27 20:04:53,335 INFO [org.apa.kaf.com.uti.AppInfoParser] (main) Kafka startTimeMs: 1598558693332
2020-08-27 20:04:53,364 WARN [org.apa.kaf.cli.NetworkClient] (kafka-admin-client-thread | adminclient-1) [AdminClient clientId=adminclient-1] Connection to node -1 (localhost/127.0.0.1:9082) could not be established. Broker may not be available.
2020-08-27 20:04:53,413 INFO [com.bri.str.DataInfoSocket] (main) Building the Topology...
2020-08-27 20:04:53,468 WARN [org.apa.kaf.cli.NetworkClient] (kafka-admin-client-thread | adminclient-1) [AdminClient clientId=adminclient-1] Connection to node -1 (localhost/127.0.0.1:9082) could not be established. Broker may not be available.
I'll admit I'm using OpenJDK 14 for my runtime (I'll see if I can remove my preview code and downgrade to Java 11). I just don't understand why it's trying to connect to node -1 (localhost/127.0.0.1:9082).
Downgrading to Java 11 fixed the problem 🤷🏻‍♂️
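A plausible explanation for why Java 11 helped (my assumption; the thread doesn't confirm it): JDK 14 changed InetSocketAddress#toString to render unresolved addresses as host/<unresolved>:port, which matches the bootstrap.servers = [kafka/<unresolved>:9082] entry in the log above. Anything that parses a bootstrap address into an InetSocketAddress and round-trips it through toString would then hand the Kafka client an unparseable host string, making it fall back to localhost. A small sketch of the difference:

```java
import java.net.InetSocketAddress;

public class UnresolvedToString {
    public static void main(String[] args) {
        // An unresolved address, as a client might build from bootstrap.servers.
        InetSocketAddress addr = InetSocketAddress.createUnresolved("kafka", 9082);

        // Accessing the components directly is stable across JDKs:
        System.out.println(addr.getHostString() + ":" + addr.getPort()); // kafka:9082

        // But the string form differs by JDK: JDK 11 prints "kafka:9082",
        // while JDK 14 and later print "kafka/<unresolved>:9082".
        System.out.println(addr);
    }
}
```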
Describe the bug
Trying to run a Kafka producer Docker container using Java 11 fails. Previously it was working with Quarkus 1.3.2.Final and Java 8.

Expected behavior
(Describe the expected behavior clearly and concisely.)

Actual behavior
Error log is:

To Reproduce
Steps to reproduce the behavior:
docker-compose up -d --build producer

Configuration

Environment (please complete the following information):
Output of java -version: 11
Output of mvnw --version or gradlew --version: gradle 1.3

Additional context
Dockerfile
build.gradle.kts
docker-compose.yml