quarkusio / quarkus

Quarkus: Supersonic Subatomic Java.
https://quarkus.io
Apache License 2.0

Unable to run with Java 11 in Docker #8655

Closed codeninja55 closed 4 years ago

codeninja55 commented 4 years ago

Describe the bug
Trying to run a Kafka producer Docker container with Java 11 fails.

It was previously working with Quarkus 1.3.2.Final and Java 8.

Expected behavior
(Describe the expected behavior clearly and concisely.)

Actual behavior
Error log is:

exec java -Djava.util.logging.manager=org.jboss.logmanager.LogManager -XX:+UseParallelGC -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90 -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=40 -XX:+ExitOnOutOfMemoryError -cp . -jar /deployments/app.jar
Exception in thread "main" java.lang.ExceptionInInitializerError
    at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
    at java.base/java.lang.Class.newInstance(Class.java:584)
    at io.quarkus.runtime.Quarkus.run(Quarkus.java:60)
    at io.quarkus.runtime.Quarkus.run(Quarkus.java:38)
    at io.quarkus.runtime.Quarkus.run(Quarkus.java:106)
    at io.quarkus.runner.GeneratedMain.main(GeneratedMain.zig:29)
Caused by: java.lang.RuntimeException: Failed to start quarkus
    at io.quarkus.runner.ApplicationImpl.<clinit>(ApplicationImpl.zig:351)
    ... 9 more
Caused by: java.lang.RuntimeException: Failed to initialize Arc
    at io.quarkus.arc.Arc.initialize(Arc.java:26)
    at io.quarkus.arc.runtime.ArcRecorder.getContainer(ArcRecorder.java:35)
    at io.quarkus.deployment.steps.ArcProcessor$generateResources19.deploy_0(ArcProcessor$generateResources19.zig:53)
    at io.quarkus.deployment.steps.ArcProcessor$generateResources19.deploy(ArcProcessor$generateResources19.zig:422)
    at io.quarkus.runner.ApplicationImpl.<clinit>(ApplicationImpl.zig:326)
    ... 9 more
Caused by: java.lang.RuntimeException: Error checking value of member method since on interface java.lang.Deprecated
    at javax.enterprise.util.AnnotationLiteral.invoke(AnnotationLiteral.java:291)
    at javax.enterprise.util.AnnotationLiteral.getMemberValue(AnnotationLiteral.java:276)
    at javax.enterprise.util.AnnotationLiteral.hashCode(AnnotationLiteral.java:246)
    at java.base/java.util.HashMap.hash(HashMap.java:339)
    at java.base/java.util.HashMap.put(HashMap.java:607)
    at java.base/java.util.HashSet.add(HashSet.java:220)
    at io.smallrye.reactive.messaging.extension.MediatorManager_Bean.<init>(MediatorManager_Bean.zig:332)
    at io.quarkus.arc.setup.Default_ComponentsProvider.addBeans1(Default_ComponentsProvider.zig:405)
    at io.quarkus.arc.setup.Default_ComponentsProvider.getComponents(Default_ComponentsProvider.zig:38)
    at io.quarkus.arc.impl.ArcContainerImpl.<init>(ArcContainerImpl.java:106)
    at io.quarkus.arc.Arc.initialize(Arc.java:20)
    ... 13 more
Caused by: java.lang.reflect.InvocationTargetException
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at javax.enterprise.util.AnnotationLiteral.invoke(AnnotationLiteral.java:288)
    ... 23 more
Caused by: java.lang.AbstractMethodError: Receiver class io.quarkus.arc.generatorDeprecated_Shared_AnnotationLiteral does not define or inherit an implementation of the resolved method 'abstract java.lang.String since()' of interface java.lang.Deprecated.
    ... 28 more

To Reproduce
Steps to reproduce the behavior:

  1. docker-compose up -d --build producer

Configuration

quarkus:
  log:
    level: "INFO"
    console:
      enable: true
      level: "INFO"
      format: "%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c{2.}] (%t) => %s%e%n"
      async:
        ~: true
        overflow: "block"
        queue-length: 512
    category:
      "org.example.kafka.streams.producer":
        level: "DEBUG"
      "*":
        use-parent-handlers: true

kafka:
  bootstrap:
    servers: ${QUARKUS_KAFKA_STREAMS_BOOTSTRAP_SERVERS:localhost:9092}

mp:
  messaging:
    outgoing:
      TOPIC-INPUT:
        connector: smallrye-kafka
        key:
          serializer: org.apache.kafka.common.serialization.StringSerializer
        value:
          serializer: org.apache.kafka.common.serialization.StringSerializer
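
For context, a minimal sketch of the bean that publishes to the TOPIC-INPUT channel configured above could look like the following; the class name, payload, and the Mutiny-based body are illustrative (assuming a recent io.smallrye.mutiny API), not code from the actual project.

package org.example.kafka.streams.producer

import io.smallrye.mutiny.Multi
import org.eclipse.microprofile.reactive.messaging.Outgoing
import java.time.Duration
import javax.enterprise.context.ApplicationScoped

@ApplicationScoped
class ValuesProducer {

    // Emit one String per second to the "TOPIC-INPUT" channel; the smallrye-kafka
    // connector serializes it with the StringSerializer configured above.
    @Outgoing("TOPIC-INPUT")
    fun produce(): Multi<String> =
        Multi.createFrom().ticks().every(Duration.ofSeconds(1))
            .map { tick -> "message-$tick" }
}

The method name and payload type are arbitrary; what matters is that the channel name in @Outgoing matches the mp.messaging.outgoing key above.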

Environment (please complete the following information):

Additional context

Dockerfile

##### STAGE 1 => BUILD
FROM gradle:6.3.0-jdk8 AS build
COPY --chown=gradle:gradle . /home/gradle/src
WORKDIR /home/gradle/src
RUN gradle quarkusBuild --no-daemon

##### STAGE 2 => DEPLOY
FROM registry.access.redhat.com/ubi8/ubi-minimal:8.1

ARG JAVA_PACKAGE=java-11-openjdk-headless
ARG RUN_JAVA_VERSION=1.3.5

ENV LANG='en_US.UTF-8' LANGUAGE='en_US:en'

# Install java and the run-java script
# Also set up permissions for user `1001`
RUN microdnf install curl ca-certificates ${JAVA_PACKAGE} \
    && microdnf update \
    && microdnf clean all \
    && mkdir /deployments \
    && chown 1001 /deployments \
    && chmod "g+rwX" /deployments \
    && chown 1001:root /deployments \
    && curl https://repo1.maven.org/maven2/io/fabric8/run-java-sh/${RUN_JAVA_VERSION}/run-java-sh-${RUN_JAVA_VERSION}-sh.sh -o /deployments/run-java.sh \
    && chown 1001 /deployments/run-java.sh \
    && chmod 540 /deployments/run-java.sh \
    && echo "securerandom.source=file:/dev/urandom" >> /etc/alternatives/jre/lib/security/java.security

# Configure the JAVA_OPTIONS, you can add -XshowSettings:vm to also display the heap size.
ENV JAVA_OPTIONS="-Dquarkus.http.host=0.0.0.0 -Djava.util.logging.manager=org.jboss.logmanager.LogManager"

COPY --from=build /home/gradle/src/build/lib/* /deployments/lib/
COPY --from=build /home/gradle/src/build/*-runner.jar /deployments/app.jar

EXPOSE 8080
USER 1001

ENTRYPOINT [ "/deployments/run-java.sh" ]
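
For completeness, building and running this image directly (outside docker-compose) would look roughly like the commands below; the image tag, context path, port, and broker address are taken from the compose file further down, and the network name is a placeholder.

docker build -f producer/src/main/docker/Dockerfile.jvm \
    -t org.example/producer-producer:latest producer
docker run -i --rm -p 8082:8082 --network=<compose-network> \
    -e QUARKUS_KAFKA_STREAMS_BOOTSTRAP_SERVERS=broker:29092 \
    org.example/producer-producer:latest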

build.gradle.kts

import org.jetbrains.kotlin.gradle.tasks.KotlinCompile

val quarkusPlatformVersion = "1.4.0.CR1"

plugins {
    id("io.quarkus") version "1.4.0.CR1"
    id("org.jetbrains.kotlin.jvm") version "1.3.71"
    id("org.jetbrains.kotlin.plugin.allopen") version "1.3.71"
}

group = "org.example"
version = "1.0.0-SNAPSHOT"

repositories {
    mavenCentral()
}

dependencies {
    implementation("org.jetbrains.kotlin:kotlin-stdlib-jdk8")
    implementation(enforcedPlatform("io.quarkus:quarkus-bom:${quarkusPlatformVersion}"))
    implementation("io.quarkus:quarkus-kotlin")
    implementation("io.quarkus:quarkus-config-yaml")
    implementation("io.quarkus:quarkus-smallrye-reactive-messaging-kafka")
    implementation("com.fasterxml.jackson.module:jackson-module-kotlin:2.10.3")
}

quarkus {
    setOutputDirectory("$projectDir/build/classes/kotlin/main")
}

allOpen {
    annotation("javax.ws.rs.Path")
    annotation("javax.enterprise.context.ApplicationScoped")
    annotation("io.quarkus.test.junit.QuarkusTest")
}

java {
    sourceCompatibility = JavaVersion.VERSION_11
    targetCompatibility = JavaVersion.VERSION_11
}

val compileKotlin: KotlinCompile by tasks
compileKotlin.kotlinOptions {
    jvmTarget = "11"
}

val compileTestKotlin: KotlinCompile by tasks
compileTestKotlin.kotlinOptions {
    jvmTarget = "11"
}

docker-compose.yml

producer:
    build:
      context: producer
      dockerfile: src/main/docker/Dockerfile.jvm
    image: org.example/producer-producer:latest
    container_name: producer
    ports:
      - 8082:8082
    depends_on:
      - kafka
    environment:
      QUARKUS_KAFKA_STREAMS_BOOTSTRAP_SERVERS: broker:29092
geoand commented 4 years ago

Hi,

Could you please create a small reproducer we can check easily? This definitely sounds like something we need to fix for 1.4.0.Final.

Thanks.

codeninja55 commented 4 years ago

It seems to be running now if I use the gradle:6.3.0-jdk11 base image for the build stage.
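
For reference, the relevant change from the Dockerfile above is only the base image of the first stage, so the runner jar is built with the same major Java version as the java-11-openjdk-headless runtime installed in stage 2:

##### STAGE 1 => BUILD
# Build with JDK 11 so the generated classes match the Java 11 runtime stage
FROM gradle:6.3.0-jdk11 AS build
COPY --chown=gradle:gradle . /home/gradle/src
WORKDIR /home/gradle/src
RUN gradle quarkusBuild --no-daemon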

An example can be found here: https://github.com/codeninja55/quarkus-producer-example

However, I seem to be unable to pass the correct hostname for the broker. I have been getting this message consistently in both the example and my current project.

=2020-04-18 09:16:46,540 INFO  [io.sm.re.me.ex.MediatorManager] (main) => Deployment done... start processing
=2020-04-18 09:16:46,568 INFO  [io.sm.re.me.im.ConfiguredChannelFactory] (main) => Found incoming connectors: [smallrye-kafka]
=2020-04-18 09:16:46,568 INFO  [io.sm.re.me.im.ConfiguredChannelFactory] (main) => Found outgoing connectors: [smallrye-kafka]
=2020-04-18 09:16:46,570 INFO  [io.sm.re.me.im.ConfiguredChannelFactory] (main) => Channel manager initializing...
=2020-04-18 09:16:46,581 INFO  [io.sm.re.me.ka.im.KafkaSink] (main) => Setting bootstrap.servers to kafka:9092
=2020-04-18 09:16:46,599 INFO  [or.ap.ka.cl.pr.ProducerConfig] (main) => ProducerConfig values: 
    acks = 1
    batch.size = 16384
    bootstrap.servers = [kafka:9092]
    buffer.memory = 33554432
    client.dns.lookup = default
    client.id = 
    compression.type = none
    connections.max.idle.ms = 540000
    delivery.timeout.ms = 120000
    enable.idempotence = false
    interceptor.classes = []
    key.serializer = class org.apache.kafka.common.serialization.IntegerSerializer
    linger.ms = 0
    max.block.ms = 60000
    max.in.flight.requests.per.connection = 5
    max.request.size = 1048576
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
    receive.buffer.bytes = 32768
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retries = 2147483647
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    security.providers = null
    send.buffer.bytes = 131072
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = https
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    transaction.timeout.ms = 60000
    transactional.id = null
    value.serializer = class org.apache.kafka.common.serialization.StringSerializer

=2020-04-18 09:16:46,730 INFO  [or.ap.ka.co.ut.AppInfoParser] (main) => Kafka version: 2.4.1
=2020-04-18 09:16:46,731 INFO  [or.ap.ka.co.ut.AppInfoParser] (main) => Kafka commitId: c57222ae8cd7866b
=2020-04-18 09:16:46,731 INFO  [or.ap.ka.co.ut.AppInfoParser] (main) => Kafka startTimeMs: 1587201406728
=2020-04-18 09:16:46,750 INFO  [io.sm.re.me.ka.im.KafkaSink] (main) => Setting bootstrap.servers to kafka:9092
=2020-04-18 09:16:46,750 INFO  [or.ap.ka.cl.pr.ProducerConfig] (main) => ProducerConfig values: 
    acks = 1
    batch.size = 16384
    bootstrap.servers = [kafka:9092]
    buffer.memory = 33554432
    client.dns.lookup = default
    client.id = 
    compression.type = none
    connections.max.idle.ms = 540000
    delivery.timeout.ms = 120000
    enable.idempotence = false
    interceptor.classes = []
    key.serializer = class org.apache.kafka.common.serialization.IntegerSerializer
    linger.ms = 0
    max.block.ms = 60000
    max.in.flight.requests.per.connection = 5
    max.request.size = 1048576
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
    receive.buffer.bytes = 32768
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retries = 2147483647
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    security.providers = null
    send.buffer.bytes = 131072
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = https
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    transaction.timeout.ms = 60000
    transactional.id = null
    value.serializer = class org.apache.kafka.common.serialization.StringSerializer

=2020-04-18 09:16:46,758 INFO  [or.ap.ka.co.ut.AppInfoParser] (main) => Kafka version: 2.4.1
=2020-04-18 09:16:46,759 INFO  [or.ap.ka.co.ut.AppInfoParser] (main) => Kafka commitId: c57222ae8cd7866b
=2020-04-18 09:16:46,760 INFO  [or.ap.ka.co.ut.AppInfoParser] (main) => Kafka startTimeMs: 1587201406758
=2020-04-18 09:16:46,763 INFO  [io.sm.re.me.ex.MediatorManager] (main) => Initializing mediators
=2020-04-18 09:16:46,801 INFO  [io.sm.re.me.ex.MediatorManager] (main) => Connecting mediators
=2020-04-18 09:16:46,803 INFO  [io.sm.re.me.ex.MediatorManager] (main) => Connecting method com.example.kafka.streams.producer.ValuesGenerator#weatherStations to sink weather-stations
=2020-04-18 09:16:46,852 INFO  [io.sm.re.me.ex.MediatorManager] (main) => Connecting method com.example.kafka.streams.producer.ValuesGenerator#generate to sink temperature-values
=2020-04-18 09:16:46,862 INFO  [io.quarkus] (main) => producer 1.0.0-SNAPSHOT (powered by Quarkus 1.4.0.CR1) started in 0.849s. 
=2020-04-18 09:16:46,862 INFO  [io.quarkus] (main) => Profile prod activated. 
=2020-04-18 09:16:46,863 INFO  [io.quarkus] (main) => Installed features: [cdi, kotlin, mutiny, smallrye-context-propagation, smallrye-reactive-messaging, smallrye-reactive-messaging-kafka, smallrye-reactive-streams-operators, vertx]
=2020-04-18 09:16:46,954 INFO  [or.ap.ka.cl.Metadata] (kafka-producer-network-thread | producer-1) => [Producer clientId=producer-1] Cluster ID: uLkJz4cuTQuELwhRtbguVA
=2020-04-18 09:16:46,954 INFO  [or.ap.ka.cl.Metadata] (kafka-producer-network-thread | producer-2) => [Producer clientId=producer-2] Cluster ID: uLkJz4cuTQuELwhRtbguVA
=2020-04-18 09:16:46,967 WARN  [or.ap.ka.cl.NetworkClient] (kafka-producer-network-thread | producer-1) => [Producer clientId=producer-1] Connection to node 1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
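
For what it's worth, the WARN above usually means the initial bootstrap connection to kafka:9092 succeeded but the broker then advertised localhost:9092 back to the client. A broker service shaped roughly like the following keeps one listener advertised as kafka:9092 for containers on the compose network and a second one advertised as localhost for the host; the confluentinc/cp-kafka image, listener names, ports, and the zookeeper service are assumptions, not taken from this project.

kafka:
    image: confluentinc/cp-kafka:5.4.1
    depends_on:
      - zookeeper
    ports:
      - 29092:29092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
      KAFKA_LISTENERS: INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:29092
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka:9092,EXTERNAL://localhost:29092
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1

With a split like that, containers on the compose network keep using kafka:9092 as the bootstrap address, while host-side clients use localhost:29092.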
geoand commented 4 years ago

@codeninja55 I am going to close this since the originally reported problem is not a Quarkus issue. As for your subsequent comment, please open another issue or ask on the Zulip chat if you think the Kafka application isn't working properly.

anthonyikeda commented 4 years ago

So I'm having the same issue as https://github.com/quarkusio/quarkus/issues/8655#issuecomment-615824292: I've configured a single listener in Kafka:

listeners=JAVA://kafka:9082
advertised.listeners=JAVA://kafka:9082
listener.security.protocol.map=JAVA:PLAINTEXT

advertised.host.name=kafka
host.name=kafka

Running kafka in a docker container:

docker run -v ${PWD}/server.properties:/config/server.properties \
    --name=kafka -p 9082:9082 --network=jobs-net kafka:2.13-2.6.0

My Quarkus 1.7.0.Final app is using the io.quarkus:quarkus-kafka-streams extension with the following property:

quarkus.kafka-streams.bootstrap-servers=${KAFKA_HOST:localhost}:${KAFKA_PORT:9092}

When I start up the application, I notice in the logs that it is unable to resolve the Kafka host and then falls back to localhost:9082:

$ docker run -e KAFKA_HOST=kafka -e KAFKA_PORT=9082 \
    -p 8030:8030 --network=jobs-net --name stream-data-api stream-data-api:1.5-SNAPSHOT
__  ____  __  _____   ___  __ ____  ______
 --/ __ \/ / / / _ | / _ \/ //_/ / / / __/
 -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \
--\___\_\____/_/ |_/_/|_/_/|_|\____/___/
2020-08-27 20:04:52,771 INFO  [io.und.web.jsr] (main) UT026003: Adding annotated server endpoint class com.brightfield.streams.DataInfoSocket for path /chat/stats/{username}
2020-08-27 20:04:53,181 INFO  [org.apa.kaf.cli.adm.AdminClientConfig] (main) AdminClientConfig values:
    bootstrap.servers = [kafka/<unresolved>:9082]
    client.dns.lookup = default
    client.id =
    connections.max.idle.ms = 300000
    default.api.timeout.ms = 60000
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    receive.buffer.bytes = 65536
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retries = 2147483647
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    security.providers = null
    send.buffer.bytes = 131072
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2]
    ssl.endpoint.identification.algorithm = https
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLSv1.2
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS

2020-08-27 20:04:53,332 WARN  [org.apa.kaf.cli.adm.AdminClientConfig] (main) The configuration 'ssl.endpoint.identification.algorithm' was supplied but isn't a known config.
2020-08-27 20:04:53,334 INFO  [org.apa.kaf.com.uti.AppInfoParser] (main) Kafka version: 2.5.0
2020-08-27 20:04:53,334 INFO  [org.apa.kaf.com.uti.AppInfoParser] (main) Kafka commitId: 66563e712b0b9f84
2020-08-27 20:04:53,335 INFO  [org.apa.kaf.com.uti.AppInfoParser] (main) Kafka startTimeMs: 1598558693332
2020-08-27 20:04:53,364 WARN  [org.apa.kaf.cli.NetworkClient] (kafka-admin-client-thread | adminclient-1) [AdminClient clientId=adminclient-1] Connection to node -1 (localhost/127.0.0.1:9082) could not be established. Broker may not be available.
2020-08-27 20:04:53,413 INFO  [com.bri.str.DataInfoSocket] (main) Building the Topology...
2020-08-27 20:04:53,468 WARN  [org.apa.kaf.cli.NetworkClient] (kafka-admin-client-thread | adminclient-1) [AdminClient clientId=adminclient-1] Connection to node -1 (localhost/127.0.0.1:9082) could not be established. Broker may not be available.

I'll admit I'm using OpenJDK 14 for my runtime (I'll see if I can remove my preview code and downgrade to Java 11). I just don't understand why it's trying to connect to node -1 (localhost/127.0.0.1:9082).

anthonyikeda commented 4 years ago

Downgrading to Java 11 fixed the problem 🤷🏻‍♂️