strigona-worksight opened this issue 4 years ago (status: Open)
Is there any known workaround here? We have just hit this and Strimzi is not an option for us.
Update: The issue appears to be associated with cub's inability to load the config provider's jar file. The workaround for us was to append the path of the jar containing org.ggt.kafka.config.provider.KafkaEnvConfigProvider to CUB_CLASSPATH, e.g. CUB_CLASSPATH=/etc/confluent/docker/docker-utils.jar:/usr/local/share/java/kafka-connect/plugins/kafka-env-config-provider/kafka-env-config-provider-0.0.1.jar
Thanks, I'll give that a shot! I had poked around a bit, but settled on overriding the entrypoint script to omit running the ensure script, bypassing cub kafka-ready entirely.
Did you set the CUB_CLASSPATH at the docker/kubernetes level?
I am not familiar enough with the project, nor with the composition of the docker container, to manipulate the startup scripts, but setting the CUB_CLASSPATH environment variable when firing up docker, i.e. docker run .... -e CUB_CLASSPATH=..., should resolve the stack trace from above.
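For example, something along these lines (the image tag, broker address, and topic names are placeholders, and the security-related CONNECT_* settings that actually trigger the issue are omitted for brevity; the CUB_CLASSPATH value is the one from the earlier comment):

docker run -d \
  -e CUB_CLASSPATH="/etc/confluent/docker/docker-utils.jar:/usr/local/share/java/kafka-connect/plugins/kafka-env-config-provider/kafka-env-config-provider-0.0.1.jar" \
  -e CONNECT_BOOTSTRAP_SERVERS="broker:9092" \
  -e CONNECT_GROUP_ID="connect-cluster" \
  -e CONNECT_CONFIG_STORAGE_TOPIC="connect-configs" \
  -e CONNECT_OFFSET_STORAGE_TOPIC="connect-offsets" \
  -e CONNECT_STATUS_STORAGE_TOPIC="connect-status" \
  -e CONNECT_KEY_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
  -e CONNECT_VALUE_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
  -e CONNECT_REST_ADVERTISED_HOST_NAME="connect" \
  confluentinc/cp-kafka-connect:5.5.1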
I also seem to have this issue with a different config provider (AWSSecretProvider from lenses.io). In my case, the cub classpath does not affect the result, but changing CONNECT_SECURITY_PROTOCOL from SASL_SSL to PLAINTEXT does resolve it (even without touching any classpaths). I shouldn't keep it on PLAINTEXT for production use, so I still hope to see a deeper fix. I hope this helps to further isolate the problem.
Since image version 6.0.0, the CUB_CLASSPATH environment variable has been updated to "/usr/share/java/cp-base-new/*". You can add your config provider jars to this directory and they will be picked up at startup:

ADD kafka-env-config-provider-0.0.1.jar /usr/share/java/cp-base-new/

This way you don't have to update the CUB_CLASSPATH environment variable in your worker configuration.
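If you would rather not build a derived image, mounting the jar into that directory at runtime should presumably work as well, since the glob in CUB_CLASSPATH is expanded at startup (the image tag and host path here are illustrative):

docker run -d \
  -v "$PWD/kafka-env-config-provider-0.0.1.jar:/usr/share/java/cp-base-new/kafka-env-config-provider-0.0.1.jar" \
  confluentinc/cp-kafka-connect:6.0.0
# (plus the usual CONNECT_* environment variables, omitted here)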
I fixed this issue for the lenses.io secrets plugin by mounting a modified version of the ensure script that simply seds the config.providers.env.class line out of the properties file, and then re-adds it after the Kafka Ready check (just in case it's actually needed there).
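A rough sketch of that approach; the properties file path and the way the original readiness check is invoked are assumptions, not the actual modified script:

PROPS=/etc/kafka-connect/kafka-connect.properties

# Stash the config.providers lines and strip them before the readiness check
grep '^config\.providers' "$PROPS" > /tmp/config-providers.properties || true
sed -i '/^config\.providers/d' "$PROPS"

# ... run the original `cub kafka-ready` check from the ensure script here ...

# Re-append the stashed provider lines afterwards, in case they are needed later
cat /tmp/config-providers.properties >> "$PROPS"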
An actual fix would be appreciated!
@jelledv I tried that with a 7.x image but then I get
===> Check if Kafka is healthy ...
Using log4j config /etc/cp-base-new/log4j.properties
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/kafka/connect/errors/ConnectException
at java.base/java.lang.Class.forName0(Native Method)
at java.base/java.lang.Class.forName(Class.java:398)
at org.apache.kafka.common.utils.Utils.loadClass(Utils.java:419)
at org.apache.kafka.common.utils.Utils.newInstance(Utils.java:408)
at org.apache.kafka.common.config.AbstractConfig.instantiateConfigProviders(AbstractConfig.java:577)
at org.apache.kafka.common.config.AbstractConfig.resolveConfigVariables(AbstractConfig.java:521)
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:112)
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:146)
at org.apache.kafka.clients.admin.AdminClientConfig.<init>(AdminClientConfig.java:235)
at org.apache.kafka.clients.admin.Admin.create(Admin.java:144)
at org.apache.kafka.clients.admin.AdminClient.create(AdminClient.java:49)
at io.confluent.admin.utils.ClusterStatus.isKafkaReady(ClusterStatus.java:136)
at io.confluent.admin.utils.cli.KafkaReadyCommand.main(KafkaReadyCommand.java:149)
Caused by: java.lang.ClassNotFoundException: org.apache.kafka.connect.errors.ConnectException
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:522)
... 13 more
Any ideas why this could happen? Does this need any other configuration?
When a config.providers is specified and the connection uses a security.protocol other than PLAINTEXT, cub kafka-ready fails. With a non-PLAINTEXT protocol, the generated kafka-connect.properties, which contains the relevant config.providers properties, is the configuration used for that check, and the error is thrown at that point.

Relevant code in the ensure file: https://github.com/confluentinc/cp-docker-images/blob/5.4-preview/debian/kafka-connect-base/include/etc/confluent/docker/ensure#L24-L28

The config provider used works just fine with the Strimzi images and works with PLAINTEXT.
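For reference, a sketch of the kind of generated configuration that triggers this (the broker address and file path are placeholders, and the provider class name is taken from a comment above):

cat > /tmp/kafka-connect.properties <<'EOF'
bootstrap.servers=broker:9093
security.protocol=SASL_SSL
config.providers=env
config.providers.env.class=org.ggt.kafka.config.provider.KafkaEnvConfigProvider
EOF
# The ensure script only passes a properties file like this to `cub kafka-ready`
# for non-PLAINTEXT protocols, and the check then fails unless the provider jar
# is on CUB_CLASSPATH.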