ImFlog / schema-registry-plugin

Gradle plugin to interact with Confluent Schema-Registry.
Apache License 2.0

Schema registration fails with error "Unexpected character ('<' (code 60))" #21

Closed: AmalVR closed this issue 5 years ago

AmalVR commented 5 years ago

Hi Florian,

I am trying to use "com.github.imflog.kafka-schema-registry-gradle-plugin" to push my schemas to my local schema registry, and I am getting a "502" with the error below on the client side.

I suspect this is some encoding issue during serialization, though I am not sure.

If you have encountered a similar issue or are aware of this problem, kindly help me.

io.confluent.kafka.schemaregistry.client.rest.exceptions.RestClientException: Unexpected character ('<' (code 60)): expected a valid value (number, String,
array, object, 'true', 'false' or 'null')
 at [Source: (sun.net.www.protocol.http.HttpURLConnection$HttpInputStream); line: 1, column: 2]; error code: 50005
        at io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:170)
        at io.confluent.kafka.schemaregistry.client.rest.RestService.httpRequest(RestService.java:188)
        at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:245)
        at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:237)
        at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:232)
        at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.registerAndGetId(CachedSchemaRegistryClient.java:59)
        at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.register(CachedSchemaRegistryClient.java:91)
        at com.github.imflog.schema.registry.register.RegisterTaskAction.registerSchema(RegisterTaskAction.kt:35)
        at com.github.imflog.schema.registry.register.RegisterTaskAction.run(RegisterTaskAction.kt:21)
        at com.github.imflog.schema.registry.register.RegisterSchemasTask.registerSchemas(RegisterTask.kt:29)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.gradle.internal.reflect.JavaMethod.invoke(JavaMethod.java:73)
        at org.gradle.api.internal.project.taskfactory.StandardTaskAction.doExecute(StandardTaskAction.java:48)
        at org.gradle.api.internal.project.taskfactory.StandardTaskAction.execute(StandardTaskAction.java:41)
        at org.gradle.api.internal.project.taskfactory.StandardTaskAction.execute(StandardTaskAction.java:28)
        at org.gradle.api.internal.AbstractTask$TaskActionWrapper.execute(AbstractTask.java:704)
        at org.gradle.api.internal.AbstractTask$TaskActionWrapper.execute(AbstractTask.java:671)
        at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter$2.run(ExecuteActionsTaskExecuter.java:284)
        at org.gradle.internal.operations.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:301)
        at org.gradle.internal.operations.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:293)
        at org.gradle.internal.operations.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:175)
        at org.gradle.internal.operations.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:91)
        at org.gradle.internal.operations.DelegatingBuildOperationExecutor.run(DelegatingBuildOperationExecutor.java:31)
        at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeAction(ExecuteActionsTaskExecuter.java:273)
        at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:258)
        at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.access$200(ExecuteActionsTaskExecuter.java:67)
        at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter$TaskExecution.execute(ExecuteActionsTaskExecuter.java:145)
        at org.gradle.internal.execution.impl.steps.ExecuteStep.execute(ExecuteStep.java:49)
        at org.gradle.internal.execution.impl.steps.CancelExecutionStep.execute(CancelExecutionStep.java:34)
        at org.gradle.internal.execution.impl.steps.TimeoutStep.executeWithoutTimeout(TimeoutStep.java:69)
        at org.gradle.internal.execution.impl.steps.TimeoutStep.execute(TimeoutStep.java:49)
        at org.gradle.internal.execution.impl.steps.CatchExceptionStep.execute(CatchExceptionStep.java:33)
        at org.gradle.internal.execution.impl.steps.CreateOutputsStep.execute(CreateOutputsStep.java:50)
        at org.gradle.internal.execution.impl.steps.SnapshotOutputStep.execute(SnapshotOutputStep.java:43)
        at org.gradle.internal.execution.impl.steps.SnapshotOutputStep.execute(SnapshotOutputStep.java:29)
        at org.gradle.internal.execution.impl.steps.CacheStep.executeWithoutCache(CacheStep.java:134)
        at org.gradle.internal.execution.impl.steps.CacheStep.lambda$execute$3(CacheStep.java:83)
        at java.util.Optional.orElseGet(Optional.java:267)
        at org.gradle.internal.execution.impl.steps.CacheStep.execute(CacheStep.java:82)
        at org.gradle.internal.execution.impl.steps.CacheStep.execute(CacheStep.java:36)
        at org.gradle.internal.execution.impl.steps.PrepareCachingStep.execute(PrepareCachingStep.java:33)
        at org.gradle.internal.execution.impl.steps.StoreSnapshotsStep.execute(StoreSnapshotsStep.java:38)
        at org.gradle.internal.execution.impl.steps.StoreSnapshotsStep.execute(StoreSnapshotsStep.java:23)
        at org.gradle.internal.execution.impl.steps.SkipUpToDateStep.executeBecause(SkipUpToDateStep.java:96)
        at org.gradle.internal.execution.impl.steps.SkipUpToDateStep.lambda$execute$0(SkipUpToDateStep.java:89)
        at java.util.Optional.map(Optional.java:215)
        at org.gradle.internal.execution.impl.steps.SkipUpToDateStep.execute(SkipUpToDateStep.java:52)
        at org.gradle.internal.execution.impl.steps.SkipUpToDateStep.execute(SkipUpToDateStep.java:36)
        at org.gradle.internal.execution.impl.DefaultWorkExecutor.execute(DefaultWorkExecutor.java:34)
        at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:91)
        at org.gradle.api.internal.tasks.execution.ResolveTaskOutputCachingStateExecuter.execute(ResolveTaskOutputCachingStateExecuter.java:91)
        at org.gradle.api.internal.tasks.execution.ValidatingTaskExecuter.execute(ValidatingTaskExecuter.java:57)
        at org.gradle.api.internal.tasks.execution.SkipEmptySourceFilesTaskExecuter.execute(SkipEmptySourceFilesTaskExecuter.java:119)
        at org.gradle.api.internal.tasks.execution.ResolvePreviousStateExecuter.execute(ResolvePreviousStateExecuter.java:43)
        at org.gradle.api.internal.tasks.execution.CleanupStaleOutputsExecuter.execute(CleanupStaleOutputsExecuter.java:93)
        at org.gradle.api.internal.tasks.execution.FinalizePropertiesTaskExecuter.execute(FinalizePropertiesTaskExecuter.java:45)
        at org.gradle.api.internal.tasks.execution.ResolveTaskArtifactStateTaskExecuter.execute(ResolveTaskArtifactStateTaskExecuter.java:94)
        at org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:56)
        at org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:55)
        at org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:36)
        at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.executeTask(EventFiringTaskExecuter.java:67)
        at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:52)
        at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:49)
        at org.gradle.internal.operations.DefaultBuildOperationExecutor$CallableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:315)
        at org.gradle.internal.operations.DefaultBuildOperationExecutor$CallableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:305)
        at org.gradle.internal.operations.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:175)
        at org.gradle.internal.operations.DefaultBuildOperationExecutor.call(DefaultBuildOperationExecutor.java:101)
        at org.gradle.internal.operations.DelegatingBuildOperationExecutor.call(DelegatingBuildOperationExecutor.java:36)
        at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter.execute(EventFiringTaskExecuter.java:49)
        at org.gradle.execution.plan.LocalTaskNodeExecutor.execute(LocalTaskNodeExecutor.java:43)
        at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:355)
        at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:343)
        at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:336)
        at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:322)
        at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker$1.execute(DefaultPlanExecutor.java:134)
        at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker$1.execute(DefaultPlanExecutor.java:129)
        at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.execute(DefaultPlanExecutor.java:202)
        at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.executeNextNode(DefaultPlanExecutor.java:193)
        at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.run(DefaultPlanExecutor.java:129)
        at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:63)
        at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:46)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:55)
        at java.lang.Thread.run(Thread.java:748)
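The message above is a strong hint about what the client actually received: JSON parsing fails on a leading '<', which is typically the first character of an HTML error page (such as a proxy's 502 page) rather than a JSON response from the registry. A minimal Python sketch of the same failure mode (the exact error text here is Python's, not Jackson's):

```python
import json

# A 502 usually comes from a proxy in front of the registry and carries an
# HTML body, not JSON. Parsing such a body as JSON fails on the leading '<',
# which is the character the Jackson error above points at.
html_body = "<html><body><h1>502 Bad Gateway</h1></body></html>"

try:
    json.loads(html_body)
except json.JSONDecodeError as exc:
    print(f"parse failed: {exc.msg} at line {exc.lineno}, column {exc.colno}")
```

So the interesting question is not what the client serialized, but what answered on port 8081 with an HTML page.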

Environment:

Sample configuration:

buildscript {
    repositories {

        maven {
            url "http://packages.confluent.io/maven/"
        }
        mavenCentral()
    }
    dependencies {
        classpath("org.jfrog.buildinfo:build-info-extractor-gradle:4+")
    }
}

plugins {
    id "java-library"
    id "maven"
    id "maven-publish"
    id "com.commercehub.gradle.plugin.avro" version "0.9.1"
    id "com.github.imflog.kafka-schema-registry-gradle-plugin" version "0.5.0"
}

ext {
    build_version = System.getenv("VERSION_NUMBER") as String ?: "1.0.0"
}

group "com.sample.test"
version "${build_version}"

repositories {
    jcenter()
    mavenLocal()
    maven {
        url "http://packages.confluent.io/maven/"
    }
    mavenCentral()
}

dependencies {
    api "org.apache.avro:avro:1.8.2"
}

schemaRegistry {
    url = 'http://localhost:8081/'
    register {
          subject('mytopic', 'src/main/avro/mysample.avsc')
    }
}

Avro schema used

{
  "namespace": "com.mysample.schema",
  "name": "mysample",
  "type": "record",
  "fields": [
    {"name": "name", "type": "string"},
    { "name": "id", "type": "string" }
  ]
}
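To isolate the plugin from the registry, the schema can be registered by hand. The Schema Registry REST API expects the Avro schema embedded as an escaped JSON string under a "schema" key; a short Python sketch building that payload (the subject name `mytopic` is taken from the configuration above):

```python
import json

# The Avro schema from the report, as a Python dict.
avro_schema = {
    "namespace": "com.mysample.schema",
    "name": "mysample",
    "type": "record",
    "fields": [
        {"name": "name", "type": "string"},
        {"name": "id", "type": "string"},
    ],
}

# The registry expects the schema as an *escaped JSON string* under the
# "schema" key, so it is serialized twice.
payload = json.dumps({"schema": json.dumps(avro_schema)})
print(payload)
```

Posting the printed payload with `curl -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" --data @- http://localhost:8081/subjects/mytopic/versions` should return a JSON body like `{"id": ...}` if the registry path is healthy; getting an HTML body back instead would reproduce the plugin's error without the plugin being involved.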
ImFlog commented 5 years ago

Hi, thank you for the interest in this plugin. It may be related to this issue: https://github.com/ImFlog/schema-registry-plugin/issues/15. Right now (though this will soon change) we depend on: implementation("io.confluent", "kafka-avro-serializer", "3.2.1")

Try to override the version to see if it works better.

AmalVR commented 5 years ago

Hi, the project I created contains only Avro schemas. As you can see in the Gradle configuration, I haven't included any dependencies for Avro serialization, and I expect the plugin to work with its own dependencies, so unlike #15 we cannot say this is a version-compatibility issue. It is clear, though, that something goes wrong during serialization. Anyway, I tried adding the Avro serializer dependency with version "3.2.1", and I am getting the same error.

ImFlog commented 5 years ago

That's strange, I just tried on my machine with your configuration and it worked. You may have a bad character in your schema file (that's what the error seems to indicate). Can you verify that the file is UTF-8 and that there are no hidden characters in it? If you have the Windows Subsystem for Linux, you can run cat -v mysample.avsc to spot hidden characters.

If this doesn't work, can you tell me how you launch the schema-registry locally?
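Beyond `cat -v`, the same check can be scripted. A hypothetical helper (not part of the plugin) that flags a UTF-8 BOM or other bytes likely to upset a JSON parser:

```python
# Hypothetical helper: flag a UTF-8 BOM or other non-printable / non-ASCII
# bytes in a schema file, the same things `cat -v` would reveal visually.
def find_suspect_bytes(data: bytes) -> list:
    issues = []
    if data.startswith(b"\xef\xbb\xbf"):
        issues.append("UTF-8 BOM at start of file")
    for offset, byte in enumerate(data):
        # Allow tab, LF, CR and printable ASCII; flag everything else.
        if byte > 0x7E or (byte < 0x20 and byte not in (0x09, 0x0A, 0x0D)):
            issues.append(f"suspect byte 0x{byte:02x} at offset {offset}")
    return issues

# A file saved with a BOM is flagged for the BOM marker and for each of
# its three bytes:
print(find_suspect_bytes(b'\xef\xbb\xbf{"name": "mysample"}'))
```

Run against the real file with `find_suspect_bytes(open("src/main/avro/mysample.avsc", "rb").read())`; an empty list means the file is clean ASCII/UTF-8.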

AmalVR commented 5 years ago

I verified the file: we are using UTF-8, and I couldn't see any hidden characters with cat -v.

We are using Kubernetes on Windows 10 with the YAML below to start the schema-registry.

apiVersion: v1
kind: List
items:

- apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: schema-registry
    labels:
      my.app: schema-registry
  spec:
    replicas: 1
    selector:
      matchLabels:
        my.app: schema-registry
    strategy:
      type: Recreate
    template:
      metadata:
        labels:
          my.app: schema-registry
      spec:
        containers:
        - name: schema-registry
          image: confluentinc/cp-schema-registry:5.0.0
          env:
          - name: SCHEMA_REGISTRY_HOST_NAME
            value: schema-registry-service
          - name: SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL
            value: zookeeper-service:2181
          - name: SCHEMA_REGISTRY_LISTENERS
            value: http://0.0.0.0:8081
          - name: SCHEMA_REGISTRY_DEBUG
            value: "true"          
          ports:
          - containerPort: 8081
          resources:
            requests:
              memory: "1Gi"
              cpu: "250m"
            limits:
              memory: "2Gi"
              cpu: "1"
        restartPolicy: Always

- apiVersion: v1
  kind: Service
  metadata:
    name: schema-registry-service
    labels:
      my.app: schema-registry
  spec:
    selector:
       my.app: schema-registry
    ports:
    - protocol: TCP
      port: 8081
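One thing worth ruling out with this setup: the plugin targets http://localhost:8081, but the registry runs inside the cluster, so something (an ingress, a load balancer, or the Docker-for-Windows bridge) sits in between, and a 502 with an HTML body is exactly what such a proxy returns when it cannot reach its backend. A sketch of commands to bypass the proxy, assuming the service and deployment names from the YAML above:

```shell
# Forward localhost:8081 straight to the in-cluster service, bypassing any
# intermediate proxy that might be producing the HTML 502 page.
kubectl port-forward svc/schema-registry-service 8081:8081

# In another terminal, confirm the registry answers with JSON (e.g. "[]"),
# then retry the Gradle registration task.
curl -s http://localhost:8081/subjects

# Tail the registry's own logs while the request is made.
kubectl logs deploy/schema-registry -f
```

If registration works through the port-forward, the bad character is coming from the proxy layer, not from the schema file or the plugin.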
ImFlog commented 5 years ago

Searching a bit in the Confluent schema-registry repository, I found this: https://github.com/confluentinc/schema-registry/issues/733. It made me realize that the error may happen on the server side but not be correctly reported on the client side.

Can you check the logs of the application running the schema-registry, @AmalVR? Do not hesitate to enable debug to get more data.

ImFlog commented 5 years ago

@AmalVR Did you manage to fetch logs on the container side ?

AmalVR commented 5 years ago

Yes I did, please find them below.

. /etc/confluent/docker/mesos-setup.sh

#!/usr/bin/env bash

set +o nounset
++ set +o nounset

if [ -z $SKIP_MESOS_AUTO_SETUP ]; then
    if [ -n $MESOS_SANDBOX ] && [ -e $MESOS_SANDBOX/.ssl/scheduler.crt ] && [ -e $MESOS_SANDBOX/.ssl/scheduler.key ]; then
        echo "Entering Mesos auto setup for Java SSL truststore. You should not see this if you are not on mesos ..."

        openssl pkcs12 -export -in $MESOS_SANDBOX/.ssl/scheduler.crt -inkey $MESOS_SANDBOX/.ssl/scheduler.key \
                       -out /tmp/keypair.p12 -name keypair \
                       -CAfile $MESOS_SANDBOX/.ssl/ca-bundle.crt -caname root -passout pass:export

        keytool -importkeystore \
                -deststorepass changeit -destkeypass changeit -destkeystore /tmp/kafka-keystore.jks \
                -srckeystore /tmp/keypair.p12 -srcstoretype PKCS12 -srcstorepass export \
                -alias keypair

        keytool -import \
                -trustcacerts \
                -alias root \
                -file $MESOS_SANDBOX/.ssl/ca-bundle.crt \
                -storepass changeit \
                -keystore /tmp/kafka-truststore.jks -noprompt
    fi
fi
++ '[' -z ']'
++ '[' -n ']'
++ '[' -e /.ssl/scheduler.crt ']'

set -o nounset
++ set -o nounset

. /etc/confluent/docker/apply-mesos-overrides

#!/usr/bin/env bash
#
# Copyright 2016 Confluent Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Mesos DC/OS docker deployments will have HOST and PORT0 
# set for the proxying of the service.
# 
# Use those values provide things we know we'll need.

[ -n "${HOST:-}" ] && [ -z "${SCHEMA_REGISTRY_HOST_NAME:-}" ] && \
    export SCHEMA_REGISTRY_HOST_NAME=$HOST || true # we don't want the setup to fail if not on Mesos
++ '[' -n '' ']'
++ true

echo "===> ENV Variables ..."
+ echo '===> ENV Variables ...'
env | sort
===> ENV Variables ...
+ env
+ sort
ALLOW_UNSIGNED=false
COMPONENT=schema-registry
CONFLUENT_DEB_VERSION=1
CONFLUENT_MAJOR_VERSION=5
CONFLUENT_MINOR_VERSION=0
CONFLUENT_MVN_LABEL=
CONFLUENT_PATCH_VERSION=0
CONFLUENT_PLATFORM_LABEL=
CONFLUENT_VERSION=5.0.0
CUB_CLASSPATH=/etc/confluent/docker/docker-utils.jar
HOME=/root
HOSTNAME=schema-registry-58b954b788-jpv7t
KAFKA_LOAD_BALANCER_PORT=tcp://10.111.4.150:9092
KAFKA_LOAD_BALANCER_PORT_9092_TCP=tcp://10.111.4.150:9092
KAFKA_LOAD_BALANCER_PORT_9092_TCP_ADDR=10.111.4.150
KAFKA_LOAD_BALANCER_PORT_9092_TCP_PORT=9092
KAFKA_LOAD_BALANCER_PORT_9092_TCP_PROTO=tcp
KAFKA_LOAD_BALANCER_SERVICE_HOST=10.111.4.150
KAFKA_LOAD_BALANCER_SERVICE_PORT=9092
KAFKA_SERVICE_PORT=tcp://10.111.51.81:9092
KAFKA_SERVICE_PORT_9092_TCP=tcp://10.111.51.81:9092
KAFKA_SERVICE_PORT_9092_TCP_ADDR=10.111.51.81
KAFKA_SERVICE_PORT_9092_TCP_PORT=9092
KAFKA_SERVICE_PORT_9092_TCP_PROTO=tcp
KAFKA_SERVICE_SERVICE_HOST=10.111.51.81
KAFKA_SERVICE_SERVICE_PORT=9092
KAFKA_VERSION=2.0.0
KSQL_SERVER_LOAD_BALANCER_PORT=tcp://10.99.109.91:8088
KSQL_SERVER_LOAD_BALANCER_PORT_8088_TCP=tcp://10.99.109.91:8088
KSQL_SERVER_LOAD_BALANCER_PORT_8088_TCP_ADDR=10.99.109.91
KSQL_SERVER_LOAD_BALANCER_PORT_8088_TCP_PORT=8088
KSQL_SERVER_LOAD_BALANCER_PORT_8088_TCP_PROTO=tcp
KSQL_SERVER_LOAD_BALANCER_SERVICE_HOST=10.99.109.91
KSQL_SERVER_LOAD_BALANCER_SERVICE_PORT=8088
KSQL_SERVICE_PORT=tcp://10.107.81.64:8088
KSQL_SERVICE_PORT_8088_TCP=tcp://10.107.81.64:8088
KSQL_SERVICE_PORT_8088_TCP_ADDR=10.107.81.64
KSQL_SERVICE_PORT_8088_TCP_PORT=8088
KSQL_SERVICE_PORT_8088_TCP_PROTO=tcp
KSQL_SERVICE_SERVICE_HOST=10.107.81.64
KSQL_SERVICE_SERVICE_PORT=8088
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
LANG=C.UTF-8
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
PYTHON_PIP_VERSION=8.1.2
PYTHON_VERSION=2.7.9-1
SCALA_VERSION=2.11
SCHEMA_REGISTRY_DEBUG=true
SCHEMA_REGISTRY_HOST_NAME=schema-registry-service
SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL=zookeeper-service:2181
SCHEMA_REGISTRY_LISTENERS=http://0.0.0.0:8081
SCHEMA_REGISTRY_LOAD_BALANCER_PORT=tcp://10.98.24.41:8081
SCHEMA_REGISTRY_LOAD_BALANCER_PORT_8081_TCP=tcp://10.98.24.41:8081
SCHEMA_REGISTRY_LOAD_BALANCER_PORT_8081_TCP_ADDR=10.98.24.41
SCHEMA_REGISTRY_LOAD_BALANCER_PORT_8081_TCP_PORT=8081
SCHEMA_REGISTRY_LOAD_BALANCER_PORT_8081_TCP_PROTO=tcp
SCHEMA_REGISTRY_LOAD_BALANCER_SERVICE_HOST=10.98.24.41
SCHEMA_REGISTRY_LOAD_BALANCER_SERVICE_PORT=8081
SCHEMA_REGISTRY_SERVICE_PORT=tcp://10.101.200.185:8081
SCHEMA_REGISTRY_SERVICE_PORT_8081_TCP=tcp://10.101.200.185:8081
SCHEMA_REGISTRY_SERVICE_PORT_8081_TCP_ADDR=10.101.200.185
SCHEMA_REGISTRY_SERVICE_PORT_8081_TCP_PORT=8081
SCHEMA_REGISTRY_SERVICE_PORT_8081_TCP_PROTO=tcp
SCHEMA_REGISTRY_SERVICE_SERVICE_HOST=10.101.200.185
SCHEMA_REGISTRY_SERVICE_SERVICE_PORT=8081
SHLVL=1
ZOOKEEPER_SERVICE_PORT=tcp://10.99.159.163:2181
ZOOKEEPER_SERVICE_PORT_2181_TCP=tcp://10.99.159.163:2181
ZOOKEEPER_SERVICE_PORT_2181_TCP_ADDR=10.99.159.163
ZOOKEEPER_SERVICE_PORT_2181_TCP_PORT=2181
ZOOKEEPER_SERVICE_PORT_2181_TCP_PROTO=tcp
ZOOKEEPER_SERVICE_SERVICE_HOST=10.99.159.163
ZOOKEEPER_SERVICE_SERVICE_PORT=2181
ZULU_OPENJDK_VERSION=8=8.30.0.1

===> User

echo "===> User"

echo "===> Configuring ..." /etc/confluent/docker/configure ===> Configuring ...

if [[ -n "${SCHEMA_REGISTRY_PORT-}" ]]
then
  echo "PORT is deprecated. Please use SCHEMA_REGISTRY_LISTENERS instead."
  exit 1
fi
+ [[ -n '' ]]

if [[ -n "${SCHEMA_REGISTRY_JMX_OPTS-}" ]]
then
  if [[ ! $SCHEMA_REGISTRY_JMX_OPTS == *"com.sun.management.jmxremote.rmi.port"*  ]]
  then
    echo "SCHEMA_REGISTRY_OPTS should contain 'com.sun.management.jmxremote.rmi.port' property. It is required for accessing the JMX metrics externally."
  fi
fi

echo "===> Running preflight checks ... " ===> Running preflight checks ...

[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:zookeeper.version=3.4.13-2d71af4dbe22557fda74f9a9b4309b15a7487f03, built on 06/29/2018 00:39 GMT
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:host.name=schema-registry-58b954b788-jpv7t
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.version=1.8.0_172
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.vendor=Azul Systems, Inc.
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.home=/usr/lib/jvm/zulu-8-amd64/jre
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.class.path=/etc/confluent/docker/docker-utils.jar
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.io.tmpdir=/tmp
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.compiler=<NA>
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.name=Linux
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.arch=amd64
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.version=4.9.125-linuxkit
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.name=root
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.home=/root
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.dir=/
[main] INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=zookeeper-service:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@b1bc7ed
[main-SendThread(zookeeper-service:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zookeeper-service/10.99.159.163:2181. Will not attempt to authenticate using SASL (unknown error)
[main-SendThread(zookeeper-service:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket connection established to zookeeper-service/10.99.159.163:2181, initiating session
[main-SendThread(zookeeper-service:2181)] INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server zookeeper-service/10.99.159.163:2181, sessionid = 0x100000264810009, negotiated timeout = 40000
[main] INFO org.apache.zookeeper.ZooKeeper - Session: 0x100000264810009 closed

===> Check if Kafka is healthy ... echo "===> Check if Kafka is healthy ..."

[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version : 2.0.0-cpNone [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId : ca8d91be74ec83ed


echo "===> Launching ... "
+ echo '===> Launching ... '
exec /etc/confluent/docker/launch
===> Launching ... 
+ exec /etc/confluent/docker/launch
===> Launching schema-registry ... 

[2019-04-12 07:14:16,774] INFO SchemaRegistryConfig values: resource.extension.class = [] metric.reporters = [] kafkastore.sasl.kerberos.kinit.cmd = /usr/bin/kinit response.mediatype.default = application/vnd.schemaregistry.v1+json kafkastore.ssl.trustmanager.algorithm = PKIX inter.instance.protocol = http authentication.realm = ssl.keystore.type = JKS kafkastore.topic = _schemas metrics.jmx.prefix = kafka.schema.registry kafkastore.ssl.enabled.protocols = TLSv1.2,TLSv1.1,TLSv1 kafkastore.topic.replication.factor = 3 ssl.truststore.password = [hidden] kafkastore.timeout.ms = 500 host.name = schema-registry-service kafkastore.bootstrap.servers = [] schema.registry.zk.namespace = schema_registry kafkastore.sasl.kerberos.ticket.renew.window.factor = 0.8 kafkastore.sasl.kerberos.service.name = schema.registry.resource.extension.class = [] ssl.endpoint.identification.algorithm = compression.enable = false kafkastore.ssl.truststore.type = JKS avro.compatibility.level = backward kafkastore.ssl.protocol = TLS kafkastore.ssl.provider = kafkastore.ssl.truststore.location = response.mediatype.preferred = [application/vnd.schemaregistry.v1+json, application/vnd.schemaregistry+json, application/json] kafkastore.ssl.keystore.type = JKS authentication.skip.paths = [] ssl.truststore.type = JKS kafkastore.ssl.truststore.password = [hidden] access.control.allow.origin = ssl.truststore.location = ssl.keystore.password = [hidden] port = 8081 kafkastore.ssl.keystore.location = metrics.tag.map = {} master.eligibility = true ssl.client.auth = false kafkastore.ssl.keystore.password = [hidden] websocket.path.prefix = /ws kafkastore.security.protocol = PLAINTEXT ssl.trustmanager.algorithm = authentication.method = NONE request.logger.name = io.confluent.rest-utils.requests ssl.key.password = [hidden] kafkastore.zk.session.timeout.ms = 30000 kafkastore.sasl.mechanism = GSSAPI kafkastore.sasl.kerberos.ticket.renew.jitter = 0.05 kafkastore.ssl.key.password = [hidden] zookeeper.set.acl = false 
schema.registry.inter.instance.protocol = authentication.roles = [] metrics.num.samples = 2 ssl.protocol = TLS schema.registry.group.id = schema-registry kafkastore.ssl.keymanager.algorithm = SunX509 kafkastore.connection.url = zookeeper-service:2181 debug = true listeners = [http://0.0.0.0:8081] kafkastore.group.id = ssl.provider = ssl.enabled.protocols = [] shutdown.graceful.ms = 1000 ssl.keystore.location = ssl.cipher.suites = [] kafkastore.ssl.endpoint.identification.algorithm = kafkastore.ssl.cipher.suites = access.control.allow.methods = kafkastore.sasl.kerberos.min.time.before.relogin = 60000 ssl.keymanager.algorithm = metrics.sample.window.ms = 30000 kafkastore.init.timeout.ms = 60000 (io.confluent.kafka.schemaregistry.rest.SchemaRegistryConfig) [2019-04-12 07:14:17,060] INFO Logging initialized @1451ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) [2019-04-12 07:14:19,177] INFO Starting ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread) [2019-04-12 07:14:19,184] INFO Client environment:zookeeper.version=3.4.13-2d71af4dbe22557fda74f9a9b4309b15a7487f03, built on 06/29/2018 00:39 GMT (org.apache.zookeeper.ZooKeeper) [2019-04-12 07:14:19,184] INFO Client environment:host.name=schema-registry-58b954b788-jpv7t (org.apache.zookeeper.ZooKeeper) [2019-04-12 07:14:19,184] INFO Client environment:java.version=1.8.0_172 (org.apache.zookeeper.ZooKeeper) [2019-04-12 07:14:19,184] INFO Client environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.ZooKeeper) [2019-04-12 07:14:19,185] INFO Client environment:java.home=/usr/lib/jvm/zulu-8-amd64/jre (org.apache.zookeeper.ZooKeeper) [2019-04-12 07:14:19,185] INFO Client environment:java.class.path=:/usr/bin/../package-schema-registry/target/kafka-schema-registry-package--development/share/java/schema-registry/*:/usr/bin/../share/java/confluent-common/zookeeper-3.4.13.jar:/usr/bin/../share/java/confluent-common/common-metrics-5.0.0.jar:/usr/bin/../share/java/confluent-common/log4j-1.2.17.jar:/usr/bin/../share/java/confluent-common/audience-annotations-0.5.0.jar:/usr/bin/../share/java/confluent-common/netty-3.10.6.Final.jar:/usr/bin/../share/java/confluent-common/jline-0.9.94.jar:/usr/bin/../share/java/confluent-common/slf4j-api-1.7.25.jar:/usr/bin/../share/java/confluent-common/zkclient-0.10.jar:/usr/bin/../share/java/confluent-common/common-config-5.0.0.jar:/usr/bin/../share/java/confluent-common/common-utils-5.0.0.jar:/usr/bin/../share/java/confluent-common/build-tools-5.0.0.jar:/usr/bin/../share/java/rest-utils/asm-tree-6.2.jar:/usr/bin/../share/java/rest-utils/activation-1.1.1.jar:/usr/bin/../share/java/rest-utils/javax.annotation-api-1.2.jar:/usr/bin/../share/java/rest-utils/jetty-security-9.4.11.v20180605.jar:/usr/bin/../share/java/rest-utils/javax.inject-1.jar:/usr/bin/../share/java/rest-utils/javax-websocket-server-impl-9.4.11.v20180605.jar:/usr/bin/../share/java/rest-utils/jetty-util-9.4.11.v20180605.jar:/usr/bin/../share/java/rest-utils/hk2-utils-2.5.0-b42.jar:/usr/bin/../share/java/rest-utils/javax.websocket-api-1.0.jar:/usr/bin/../share/java/rest-utils/hibernate-validator-5.1.3.Final.jar:/usr/bin/../share/java/rest-utils/jetty-server-9.4.11.v20180605.jar:/usr/bin/../share/java/rest-utils/jetty-plus-9.4.11.v20180605.jar:/usr/bin/../share/java/rest-utils/jersey-container-servlet-2.27.jar:/usr/bin/../share/java/rest-utils/websocket-api-9.4.11.v20180605.jar:/usr/bin/../share/java/rest-utils/jetty-xml-9.4.11.v20180605.jar:/usr/bin/../s
hare/java/rest-utils/hk2-locator-2.5.0-b42.jar:/usr/bin/../share/java/rest-utils/jackson-module-jaxb-annotations-2.9.6.jar:/usr/bin/../share/java/rest-utils/jersey-common-2.27.jar:/usr/bin/../share/java/rest-utils/jetty-client-9.4.11.v20180605.jar:/usr/bin/../share/java/rest-utils/jboss-logging-3.1.3.GA.jar:/usr/bin/../share/java/rest-utils/jetty-jaas-9.4.11.v20180605.jar:/usr/bin/../share/java/rest-utils/asm-6.2.jar:/usr/bin/../share/java/rest-utils/jersey-container-servlet-core-2.27.jar:/usr/bin/../share/java/rest-utils/jetty-webapp-9.4.11.v20180605.jar:/usr/bin/../share/java/rest-utils/osgi-resource-locator-1.0.1.jar:/usr/bin/../share/java/rest-utils/jackson-core-2.9.6.jar:/usr/bin/../share/java/rest-utils/jersey-server-2.27.jar:/usr/bin/../share/java/rest-utils/aopalliance-repackaged-2.5.0-b42.jar:/usr/bin/../share/java/rest-utils/jackson-annotations-2.9.6.jar:/usr/bin/../share/java/rest-utils/hk2-api-2.5.0-b42.jar:/usr/bin/../share/java/rest-utils/rest-utils-5.0.0.jar:/usr/bin/../share/java/rest-utils/jersey-media-jaxb-2.27.jar:/usr/bin/../share/java/rest-utils/javax.ws.rs-api-2.1.jar:/usr/bin/../share/java/rest-utils/jetty-servlet-9.4.11.v20180605.jar:/usr/bin/../share/java/rest-utils/asm-commons-6.2.jar:/usr/bin/../share/java/rest-utils/websocket-server-9.4.11.v20180605.jar:/usr/bin/../share/java/rest-utils/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/rest-utils/jackson-databind-2.9.6.jar:/usr/bin/../share/java/rest-utils/jersey-hk2-2.27.jar:/usr/bin/../share/java/rest-utils/jetty-continuation-9.4.11.v20180605.jar:/usr/bin/../share/java/rest-utils/jetty-http-9.4.11.v20180605.jar:/usr/bin/../share/java/rest-utils/jackson-jaxrs-json-provider-2.9.6.jar:/usr/bin/../share/java/rest-utils/javax.el-api-2.2.4.jar:/usr/bin/../share/java/rest-utils/jetty-jmx-9.4.11.v20180605.jar:/usr/bin/../share/java/rest-utils/websocket-common-9.4.11.v20180605.jar:/usr/bin/../share/java/rest-utils/javassist-3.22.0-CR2.jar:/usr/bin/../share/java/rest-utils/classmate-1.0.0.jar:/u
sr/bin/../share/java/rest-utils/jetty-jndi-9.4.11.v20180605.jar:/usr/bin/../share/java/rest-utils/jaxb-api-2.3.0.jar:/usr/bin/../share/java/rest-utils/asm-analysis-6.2.jar:/usr/bin/../share/java/rest-utils/websocket-servlet-9.4.11.v20180605.jar:/usr/bin/../share/java/rest-utils/javax.el-2.2.4.jar:/usr/bin/../share/java/rest-utils/jersey-bean-validation-2.27.jar:/usr/bin/../share/java/rest-utils/jetty-servlets-9.4.11.v20180605.jar:/usr/bin/../share/java/rest-utils/javax.websocket-client-api-1.0.jar:/usr/bin/../share/java/rest-utils/jackson-jaxrs-base-2.9.6.jar:/usr/bin/../share/java/rest-utils/javax-websocket-client-impl-9.4.11.v20180605.jar:/usr/bin/../share/java/rest-utils/javax.inject-2.5.0-b42.jar:/usr/bin/../share/java/rest-utils/validation-api-1.1.0.Final.jar:/usr/bin/../share/java/rest-utils/jersey-client-2.27.jar:/usr/bin/../share/java/rest-utils/jetty-io-9.4.11.v20180605.jar:/usr/bin/../share/java/rest-utils/websocket-client-9.4.11.v20180605.jar:/usr/bin/../share/java/rest-utils/jetty-annotations-9.4.11.v20180605.jar:/usr/bin/../share/java/schema-registry/javax.annotation-api-1.2.jar:/usr/bin/../share/java/schema-registry/zookeeper-3.4.13.jar:/usr/bin/../share/java/schema-registry/confluent-licensing-new-5.0.0.jar:/usr/bin/../share/java/schema-registry/gson-2.7.jar:/usr/bin/../share/java/schema-registry/hibernate-validator-5.1.3.Final.jar:/usr/bin/../share/java/schema-registry/metrics-core-2.2.0.jar:/usr/bin/../share/java/schema-registry/confluent-schema-registry-security-plugin-5.0.0.jar:/usr/bin/../share/java/schema-registry/log4j-1.2.17.jar:/usr/bin/../share/java/schema-registry/jopt-simple-5.0.4.jar:/usr/bin/../share/java/schema-registry/avro-1.8.1.jar:/usr/bin/../share/java/schema-registry/audience-annotations-0.5.0.jar:/usr/bin/../share/java/schema-registry/scala-reflect-2.11.12.jar:/usr/bin/../share/java/schema-registry/jackson-mapper-asl-1.9.13.jar:/usr/bin/../share/java/schema-registry/netty-3.10.6.Final.jar:/usr/bin/../share/java/schema-registry/sn
appy-java-1.1.7.1.jar:/usr/bin/../share/java/schema-registry/jersey-common-2.27.jar:/usr/bin/../share/java/schema-registry/jboss-logging-3.1.3.GA.jar:/usr/bin/../share/java/schema-registry/jline-0.9.94.jar:/usr/bin/../share/java/schema-registry/kafka-schema-registry-client-5.0.0.jar:/usr/bin/../share/java/schema-registry/osgi-resource-locator-1.0.1.jar:/usr/bin/../share/java/schema-registry/jackson-core-2.9.6.jar:/usr/bin/../share/java/schema-registry/jersey-server-2.27.jar:/usr/bin/../share/java/schema-registry/confluent-security-plugins-common-5.0.0.jar:/usr/bin/../share/java/schema-registry/jackson-annotations-2.9.6.jar:/usr/bin/../share/java/schema-registry/kafka-schema-registry-5.0.0.jar:/usr/bin/../share/java/schema-registry/protobuf-java-util-3.4.0.jar:/usr/bin/../share/java/schema-registry/slf4j-api-1.7.25.jar:/usr/bin/../share/java/schema-registry/jersey-media-jaxb-2.27.jar:/usr/bin/../share/java/schema-registry/javax.ws.rs-api-2.1.jar:/usr/bin/../share/java/schema-registry/jackson-core-asl-1.9.13.jar:/usr/bin/../share/java/schema-registry/jose4j-0.6.1.jar:/usr/bin/../share/java/schema-registry/jackson-databind-2.9.6.jar:/usr/bin/../share/java/schema-registry/commons-compress-1.8.1.jar:/usr/bin/../share/java/schema-registry/guava-20.0.jar:/usr/bin/../share/java/schema-registry/xz-1.5.jar:/usr/bin/../share/java/schema-registry/javax.el-api-2.2.4.jar:/usr/bin/../share/java/schema-registry/protobuf-java-3.4.0.jar:/usr/bin/../share/java/schema-registry/kafka_2.11-2.0.0-cp1.jar:/usr/bin/../share/java/schema-registry/paranamer-2.7.jar:/usr/bin/../share/java/schema-registry/classmate-1.0.0.jar:/usr/bin/../share/java/schema-registry/scala-logging_2.11-3.9.0.jar:/usr/bin/../share/java/schema-registry/javax.el-2.2.4.jar:/usr/bin/../share/java/schema-registry/jersey-bean-validation-2.27.jar:/usr/bin/../share/java/schema-registry/zkclient-0.10.jar:/usr/bin/../share/java/schema-registry/slf4j-log4j12-1.7.25.jar:/usr/bin/../share/java/schema-registry/confluent-serializer
s-new-5.0.0.jar:/usr/bin/../share/java/schema-registry/lz4-java-1.4.1.jar:/usr/bin/../share/java/schema-registry/javax.inject-2.5.0-b42.jar:/usr/bin/../share/java/schema-registry/kafka-clients-2.0.0-cp1.jar:/usr/bin/../share/java/schema-registry/validation-api-1.1.0.Final.jar:/usr/bin/../share/java/schema-registry/scala-library-2.11.12.jar:/usr/bin/../share/java/schema-registry/jersey-client-2.27.jar:/usr/bin/../share/java/schema-registry/common-utils-5.0.0.jar (org.apache.zookeeper.ZooKeeper) [2019-04-12 07:14:19,185] INFO Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) [2019-04-12 07:14:19,185] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) [2019-04-12 07:14:19,185] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) [2019-04-12 07:14:19,185] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) [2019-04-12 07:14:19,185] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) [2019-04-12 07:14:19,185] INFO Client environment:os.version=4.9.125-linuxkit (org.apache.zookeeper.ZooKeeper) [2019-04-12 07:14:19,185] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper) [2019-04-12 07:14:19,186] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper) [2019-04-12 07:14:19,186] INFO Client environment:user.dir=/ (org.apache.zookeeper.ZooKeeper) [2019-04-12 07:14:19,187] INFO Initiating client connection, connectString=zookeeper-service:2181 sessionTimeout=30000 watcher=org.I0Itec.zkclient.ZkClient@1b11171f (org.apache.zookeeper.ZooKeeper) [2019-04-12 07:14:19,202] INFO Waiting for keeper state SyncConnected (org.I0Itec.zkclient.ZkClient) [2019-04-12 07:14:19,206] INFO Opening socket connection to server zookeeper-service/10.99.159.163:2181. 
Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) [2019-04-12 07:14:19,212] INFO Socket connection established to zookeeper-service/10.99.159.163:2181, initiating session (org.apache.zookeeper.ClientCnxn) [2019-04-12 07:14:19,247] INFO Session establishment complete on server zookeeper-service/10.99.159.163:2181, sessionid = 0x10000026481000c, negotiated timeout = 30000 (org.apache.zookeeper.ClientCnxn) [2019-04-12 07:14:19,250] INFO zookeeper state changed (SyncConnected) (org.I0Itec.zkclient.ZkClient) [2019-04-12 07:14:19,264] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) [2019-04-12 07:14:19,424] INFO Terminate ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread) [2019-04-12 07:14:19,456] INFO Session: 0x10000026481000c closed (org.apache.zookeeper.ZooKeeper) [2019-04-12 07:14:19,456] INFO EventThread shut down for session: 0x10000026481000c (org.apache.zookeeper.ClientCnxn) [2019-04-12 07:14:19,456] INFO Initializing KafkaStore with broker endpoints: PLAINTEXT://kafka-service:9092 (io.confluent.kafka.schemaregistry.storage.KafkaStore) [2019-04-12 07:14:19,482] INFO AdminClientConfig values: bootstrap.servers = [PLAINTEXT://kafka-service:9092] client.id = connections.max.idle.ms = 300000 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 120000 retries = 5 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 
sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT send.buffer.bytes = 131072 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = https ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS (org.apache.kafka.clients.admin.AdminClientConfig) [2019-04-12 07:14:19,595] WARN The configuration 'connection.url' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [2019-04-12 07:14:19,599] INFO Kafka version : 2.0.0-cp1 (org.apache.kafka.common.utils.AppInfoParser) [2019-04-12 07:14:19,599] INFO Kafka commitId : 4b1dd33f255ddd2f (org.apache.kafka.common.utils.AppInfoParser) [2019-04-12 07:14:19,970] INFO Validating schemas topic _schemas (io.confluent.kafka.schemaregistry.storage.KafkaStore) [2019-04-12 07:14:19,977] WARN The replication factor of the schema topic _schemas is less than the desired one of 3. If this is a production environment, it's crucial to add more brokers and increase the replication factor of the topic. 
(io.confluent.kafka.schemaregistry.storage.KafkaStore) [2019-04-12 07:14:20,064] INFO ProducerConfig values: acks = -1 batch.size = 16384 bootstrap.servers = [PLAINTEXT://kafka-service:9092] buffer.memory = 33554432 client.id = compression.type = none confluent.batch.expiry.ms = 30000 connections.max.idle.ms = 540000 enable.idempotence = false interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 5 max.request.size = 1048576 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 0 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT send.buffer.bytes = 131072 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = https ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS 
transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer (org.apache.kafka.clients.producer.ProducerConfig) [2019-04-12 07:14:20,154] WARN The configuration 'connection.url' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig) [2019-04-12 07:14:20,154] INFO Kafka version : 2.0.0-cp1 (org.apache.kafka.common.utils.AppInfoParser) [2019-04-12 07:14:20,154] INFO Kafka commitId : 4b1dd33f255ddd2f (org.apache.kafka.common.utils.AppInfoParser) [2019-04-12 07:14:20,174] INFO ConsumerConfig values: auto.commit.interval.ms = 5000 auto.offset.reset = earliest bootstrap.servers = [PLAINTEXT://kafka-service:9092] check.crcs = true client.id = KafkaStore-reader-_schemas connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = schema-registry-schema-registry-service-8081 heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 
sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT send.buffer.bytes = 131072 session.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = https ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer (org.apache.kafka.clients.consumer.ConsumerConfig) [2019-04-12 07:14:20,268] WARN The configuration 'connection.url' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig) [2019-04-12 07:14:20,268] INFO Kafka version : 2.0.0-cp1 (org.apache.kafka.common.utils.AppInfoParser) [2019-04-12 07:14:20,268] INFO Kafka commitId : 4b1dd33f255ddd2f (org.apache.kafka.common.utils.AppInfoParser) [2019-04-12 07:14:20,292] INFO Cluster ID: _QAvlN5IR6KbD3xswC8PwQ (org.apache.kafka.clients.Metadata) [2019-04-12 07:14:20,295] INFO Initialized last consumed offset to -1 (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread)

[2019-04-12 07:14:20,347] INFO [Consumer clientId=KafkaStore-reader-_schemas, groupId=schema-registry-schema-registry-service-8081] Resetting offset for partition _schemas-0 to offset 0. (org.apache.kafka.clients.consumer.internals.Fetcher) [2019-04-12 07:14:20,747] INFO Cluster ID: _QAvlN5IR6KbD3xswC8PwQ (org.apache.kafka.clients.Metadata) [2019-04-12 07:14:20,863] INFO Wait to catch up until the offset of the last message at 2 (io.confluent.kafka.schemaregistry.storage.KafkaStore) [2019-04-12 07:14:20,950] INFO Joining schema registry with Zookeeper-based coordination (io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry) [2019-04-12 07:14:20,958] INFO Initiating client connection, connectString=zookeeper-service:2181 sessionTimeout=30000 watcher=org.I0Itec.zkclient.ZkClient@a4add54 (org.apache.zookeeper.ZooKeeper) [2019-04-12 07:14:20,958] INFO Starting ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread) [2019-04-12 07:14:20,966] INFO Opening socket connection to server zookeeper-service/10.99.159.163:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) [2019-04-12 07:14:20,966] INFO Waiting for keeper state SyncConnected (org.I0Itec.zkclient.ZkClient) [2019-04-12 07:14:20,967] INFO Socket connection established to zookeeper-service/10.99.159.163:2181, initiating session (org.apache.zookeeper.ClientCnxn) [2019-04-12 07:14:20,978] INFO Session establishment complete on server zookeeper-service/10.99.159.163:2181, sessionid = 0x10000026481000d, negotiated timeout = 30000 (org.apache.zookeeper.ClientCnxn) [2019-04-12 07:14:20,979] INFO zookeeper state changed (SyncConnected) (org.I0Itec.zkclient.ZkClient) [2019-04-12 07:14:20,986] INFO Created schema registry namespace zookeeper-service:2181/schema_registry (io.confluent.kafka.schemaregistry.masterelector.zookeeper.ZookeeperMasterElector) [2019-04-12 07:14:20,986] INFO Terminate ZkClient event thread. 
(org.I0Itec.zkclient.ZkEventThread) [2019-04-12 07:14:21,001] INFO Session: 0x10000026481000d closed (org.apache.zookeeper.ZooKeeper) [2019-04-12 07:14:21,001] INFO EventThread shut down for session: 0x10000026481000d (org.apache.zookeeper.ClientCnxn) [2019-04-12 07:14:21,008] INFO Initiating client connection, connectString=zookeeper-service:2181/schema_registry sessionTimeout=30000 watcher=org.I0Itec.zkclient.ZkClient@5ba88be8 (org.apache.zookeeper.ZooKeeper) [2019-04-12 07:14:21,008] INFO Starting ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread) [2019-04-12 07:14:21,015] INFO Waiting for keeper state SyncConnected (org.I0Itec.zkclient.ZkClient) [2019-04-12 07:14:21,020] INFO Opening socket connection to server zookeeper-service/10.99.159.163:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) [2019-04-12 07:14:21,022] INFO Socket connection established to zookeeper-service/10.99.159.163:2181, initiating session (org.apache.zookeeper.ClientCnxn) [2019-04-12 07:14:21,054] INFO Session establishment complete on server zookeeper-service/10.99.159.163:2181, sessionid = 0x10000026481000e, negotiated timeout = 30000 (org.apache.zookeeper.ClientCnxn) [2019-04-12 07:14:21,054] INFO zookeeper state changed (SyncConnected) (org.I0Itec.zkclient.ZkClient) [2019-04-12 07:14:21,090] INFO Successfully elected the new master: {"host":"schema-registry-service","port":8081,"master_eligibility":true,"scheme":"http","version":1} (io.confluent.kafka.schemaregistry.masterelector.zookeeper.ZookeeperMasterElector) [2019-04-12 07:14:21,127] INFO Wait to catch up until the offset of the last message at 3 (io.confluent.kafka.schemaregistry.storage.KafkaStore) [2019-04-12 07:14:21,128] INFO /schema_registry_master exists with value {"host":"schema-registry-service","port":8081,"master_eligibility":true,"scheme":"http","version":1} during connection loss; this is ok (kafka.utils.ZkUtils) [2019-04-12 07:14:21,129] INFO Successfully 
elected the new master: {"host":"schema-registry-service","port":8081,"master_eligibility":true,"scheme":"http","version":1} (io.confluent.kafka.schemaregistry.masterelector.zookeeper.ZookeeperMasterElector) [2019-04-12 07:14:21,360] INFO Adding listener: http://0.0.0.0:8081 (io.confluent.rest.Application) [2019-04-12 07:14:22,259] INFO jetty-9.4.11.v20180605; built: 2018-06-05T18:24:03.829Z; git: d5fc0523cfa96bfebfbda19606cad384d772f04c; jvm 1.8.0_172-b01 (org.eclipse.jetty.server.Server) [2019-04-12 07:14:22,570] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) [2019-04-12 07:14:22,570] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) [2019-04-12 07:14:22,573] INFO node0 Scavenging every 660000ms (org.eclipse.jetty.server.session) Apr 12, 2019 7:14:24 AM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.ConfigResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.ConfigResource will be ignored. Apr 12, 2019 7:14:24 AM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.SchemasResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.SchemasResource will be ignored. Apr 12, 2019 7:14:24 AM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.CompatibilityResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. 
Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.CompatibilityResource will be ignored. Apr 12, 2019 7:14:24 AM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.SubjectsResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.SubjectsResource will be ignored. Apr 12, 2019 7:14:24 AM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.SubjectVersionsResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.SubjectVersionsResource will be ignored. [2019-04-12 07:14:24,987] INFO HV000001: Hibernate Validator 5.1.3.Final (org.hibernate.validator.internal.util.Version) [2019-04-12 07:14:25,780] INFO Started o.e.j.s.ServletContextHandler@3d484181{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) [2019-04-12 07:14:25,868] INFO Started o.e.j.s.ServletContextHandler@53f0a4cb{/ws,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) [2019-04-12 07:14:25,955] INFO Started NetworkTrafficServerConnector@1477089c{HTTP/1.1,[http/1.1]}{0.0.0.0:8081} (org.eclipse.jetty.server.AbstractConnector) [2019-04-12 07:14:25,956] INFO Started @10409ms (org.eclipse.jetty.server.Server) [2019-04-12 07:14:25,957] INFO Server started, listening for requests... (io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain)



Interestingly, with the same plugin and the same schemas, we were able to run the task and push to the schema registry elsewhere. The failure is happening on my local machine, where the registry is also running locally.

From the log, I couldn't identify anything related to schema validation on the server side. I suspect the request breaks even before that, perhaps in the client APIs.
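The error itself gives a strong hint: the client expected a JSON body, but the response started with `<` at line 1, column 2 — which almost always means an HTML error page (such as a proxy's 502 page) came back instead of JSON. A minimal sketch (illustrative only, not the plugin's actual code path) showing how parsing such a body fails in the same way:

```python
import json

# A 502 from an intermediate proxy typically carries an HTML error page.
# The schema-registry client expects JSON (e.g. {"id": 1}), so feeding
# HTML to a JSON parser fails on the very first '<' character.
html_error_page = "<html><body><h1>502 Bad Gateway</h1></body></html>"

try:
    json.loads(html_error_page)
except json.JSONDecodeError as exc:
    print(f"Parse failed: {exc.msg} (line {exc.lineno}, column {exc.colno})")
```

If this is the cause, it may be worth checking for `http_proxy`/`https_proxy` environment variables or any corporate proxy sitting between the build and the local registry.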
ImFlog commented 5 years ago

It seems that your logs only cover boot time.

[2019-04-12 07:14:25,957] INFO Server started, listening for requests... (io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain)

Can you try to register the schema and share the lines that follow this one?

Overall, I think it's a Windows-related issue, but I would like the plugin to work on every OS :)

ImFlog commented 5 years ago

@AmalVR can you give an update on this? Have you been able to fetch the logs when the call is made? We already established that this is a Windows-related error, so I'm not able to reproduce it. Without news from you in the next few days I will close this task. Thank you :)

AmalVR commented 5 years ago

@ImFlog, in fact I tried to collect the logs after attempting to register the schema, but I couldn't find anything useful in them. It seems the call is not reaching the server and is breaking on the client side while processing the request. I suspect a problem occurring during JSON (RegisterSchemaRequest) processing.

Also, I am not sure this is OS-related, because on a different machine, which runs Windows, we were able to run the registerSchemaTask successfully.
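One way to take the plugin out of the equation is to build the register request by hand and POST it directly to the registry. The registry's `POST /subjects/<subject>/versions` endpoint expects a JSON object whose `schema` field is the Avro schema serialized as a string. A sketch of building that payload (the subject name and schema below are placeholders, not the schemas from this issue):

```python
import json

# Illustrative Avro schema; replace with one of the schemas the task registers.
avro_schema = {
    "type": "record",
    "name": "Example",
    "fields": [{"name": "id", "type": "string"}],
}

# The registry expects the schema as a *string* inside the JSON body,
# i.e. JSON-escaped JSON -- a common source of malformed requests.
payload = json.dumps({"schema": json.dumps(avro_schema)})
print(payload)

# POST this to http://localhost:8081/subjects/<subject>/versions with
# Content-Type: application/vnd.schemaregistry.v1+json (e.g. via curl)
# and compare the raw response with what the plugin receives.
```

If the hand-built request succeeds against the same registry, the problem is on the client side; if it also returns an HTML 502 page, something between the machine and the registry is at fault.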

ImFlog commented 5 years ago

@AmalVR the best way to see what could be wrong would be for you to create a GitHub repository with all your files. Maybe I missed something.

ImFlog commented 5 years ago

It's been a month since my last comment. Any news, @AmalVR? Otherwise I'll close this issue, as it seems to be something outside the plugin's scope.

AmalVR commented 5 years ago

Yes, you may close the issue. Thank you so much for your help.


ImFlog commented 5 years ago

My pleasure. Do not hesitate if you have any new issues ;)