Open hectorgarzon opened 7 years ago
@hectorgarzon Can you run docker logs for the container to see if there is anything useful in the REST Proxy log?
docker logs kafka-rest gives this:
[18:51:09](develop)$ docker logs kafka-rest
echo "===> ENV Variables ..."
+ echo '===> ENV Variables ...'
env | sort
===> ENV Variables ...
+ env
+ sort
ACCESS_CONTROL_ALLOW_ORIGIN_DEFAULT=*
COMPONENT=kafka-rest
CONFLUENT_DEB_REPO=http://packages.confluent.io
CONFLUENT_DEB_VERSION=1
CONFLUENT_MAJOR_VERSION=3
CONFLUENT_MINOR_VERSION=0
CONFLUENT_PATCH_VERSION=1
CONFLUENT_VERSION=3.0.1
HOME=/root
HOSTNAME=4e53c563d08a
KAFKA_REST_LISTENERS=http://localhost:8082
KAFKA_REST_ZOOKEEPER_CONNECT=zookeeper1:2181
LANG=C.UTF-8
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
PYTHON_PIP_VERSION=8.1.2
PYTHON_VERSION=2.7.9-1
SCALA_VERSION=2.11
SHLVL=1
ZULU_OPENJDK_VERSION=8=8.15.0.1
_=/usr/bin/env
no_proxy=*.local, 169.254/16
echo "===> User"
+ echo '===> User'
id
+ id
===> User
uid=0(root) gid=0(root) groups=0(root)
===> Configuring ...
echo "===> Configuring ..."
+ echo '===> Configuring ...'
/etc/confluent/docker/configure
+ /etc/confluent/docker/configure
dub ensure KAFKA_REST_ZOOKEEPER_CONNECT
+ dub ensure KAFKA_REST_ZOOKEEPER_CONNECT
dub path /etc/"${COMPONENT}"/ writable
+ dub path /etc/kafka-rest/ writable
if [[ -n "${KAFKA_REST_PORT-}" ]]
then
echo "PORT is deprecated. Please use KAFKA_REST_LISTENERS instead."
exit 1
fi
+ [[ -n '' ]]
if [[ -n "${KAFKA_REST_JMX_OPTS-}" ]]
then
if [[ ! $KAFKA_REST_JMX_OPTS == *"com.sun.management.jmxremote.rmi.port"* ]]
then
echo "KAFKA_REST_OPTS should contain 'com.sun.management.jmxremote.rmi.port' property. It is required for accessing the JMX metrics externally."
fi
fi
+ [[ -n '' ]]
dub template "/etc/confluent/docker/${COMPONENT}.properties.template" "/etc/${COMPONENT}/${COMPONENT}.properties"
+ dub template /etc/confluent/docker/kafka-rest.properties.template /etc/kafka-rest/kafka-rest.properties
dub template "/etc/confluent/docker/log4j.properties.template" "/etc/${COMPONENT}/log4j.properties"
+ dub template /etc/confluent/docker/log4j.properties.template /etc/kafka-rest/log4j.properties
===> Running preflight checks ...
echo "===> Running preflight checks ... "
+ echo '===> Running preflight checks ... '
/etc/confluent/docker/ensure
+ /etc/confluent/docker/ensure
echo "===> Check if Zookeeper is healthy ..."
+ echo '===> Check if Zookeeper is healthy ...'
cub zk-ready "$KAFKA_REST_ZOOKEEPER_CONNECT" "${KAFKA_REST_CUB_ZK_TIMEOUT:-40}"
+ cub zk-ready zookeeper1:2181 40
===> Check if Zookeeper is healthy ...
Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
Client environment:host.name=4e53c563d08a
Client environment:java.version=1.8.0_92
Client environment:java.vendor=Azul Systems, Inc.
Client environment:java.home=/usr/lib/jvm/zulu-8-amd64/jre
Client environment:java.class.path=/etc/confluent/docker/docker-utils.jar
Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
Client environment:java.io.tmpdir=/tmp
Client environment:java.compiler=<NA>
Client environment:os.name=Linux
Client environment:os.arch=amd64
Client environment:os.version=4.9.27-moby
Client environment:user.name=root
Client environment:user.home=/root
Client environment:user.dir=/
Initiating client connection, connectString=zookeeper1:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@14514713
Opening socket connection to server zookeeper1.bam2/172.18.0.2:2181. Will not attempt to authenticate using SASL (unknown error)
Socket connection established to zookeeper1.bam2/172.18.0.2:2181, initiating session
Session establishment complete on server zookeeper1.bam2/172.18.0.2:2181, sessionid = 0x15e377bdb2c0121, negotiated timeout = 40000
Session: 0x15e377bdb2c0121 closed
EventThread shut down
echo "===> Check if Kafka is healthy ..."
+ echo '===> Check if Kafka is healthy ...'
===> Check if Kafka is healthy ...
cub kafka-ready \
"${KAFKA_REST_CUB_KAFKA_MIN_BROKERS:-1}" \
"${KAFKA_REST_CUB_KAFKA_TIMEOUT:-40}" \
-z "$KAFKA_REST_ZOOKEEPER_CONNECT"
+ cub kafka-ready 1 40 -z zookeeper1:2181
Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
Client environment:host.name=4e53c563d08a
Client environment:java.version=1.8.0_92
Client environment:java.vendor=Azul Systems, Inc.
Client environment:java.home=/usr/lib/jvm/zulu-8-amd64/jre
Client environment:java.class.path=/etc/confluent/docker/docker-utils.jar
Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
Client environment:java.io.tmpdir=/tmp
Client environment:java.compiler=<NA>
Client environment:os.name=Linux
Client environment:os.arch=amd64
Client environment:os.version=4.9.27-moby
Client environment:user.name=root
Client environment:user.home=/root
Client environment:user.dir=/
Initiating client connection, connectString=zookeeper1:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@4459eb14
Opening socket connection to server zookeeper1.bam2/172.18.0.2:2181. Will not attempt to authenticate using SASL (unknown error)
Socket connection established to zookeeper1.bam2/172.18.0.2:2181, initiating session
Session establishment complete on server zookeeper1.bam2/172.18.0.2:2181, sessionid = 0x15e377bdb2c0122, negotiated timeout = 40000
Session: 0x15e377bdb2c0122 closed
EventThread shut down
Initiating client connection, connectString=zookeeper1:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@15327b79
Opening socket connection to server zookeeper1.bam2/172.18.0.2:2181. Will not attempt to authenticate using SASL (unknown error)
Socket connection established to zookeeper1.bam2/172.18.0.2:2181, initiating session
Session establishment complete on server zookeeper1.bam2/172.18.0.2:2181, sessionid = 0x15e377bdb2c0123, negotiated timeout = 40000
Session: 0x15e377bdb2c0123 closed
EventThread shut down
MetadataClientConfig values:
ssl.protocol = TLS
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
sasl.kerberos.ticket.renew.window.factor = 0.8
ssl.keystore.location = null
bootstrap.servers = [kafka1:9092]
ssl.cipher.suites = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
ssl.truststore.type = JKS
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
security.protocol = PLAINTEXT
ssl.keystore.type = JKS
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.keystore.password = null
ssl.keymanager.algorithm = SunX509
sasl.mechanism = GSSAPI
ssl.key.password = null
sasl.kerberos.min.time.before.relogin = 60000
ssl.truststore.password = null
ssl.endpoint.identification.algorithm = null
echo "===> Launching ... "
+ echo '===> Launching ... '
exec /etc/confluent/docker/launch
+ exec /etc/confluent/docker/launch
===> Launching ...
===> Launching kafka-rest ...
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/share/java/confluent-common/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/share/java/kafka-rest/slf4j-log4j12-1.7.21.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
[2017-09-05 16:51:12,898] INFO KafkaRestConfig values:
simpleconsumer.pool.timeout.ms = 1000
metric.reporters = []
ssl.client.auth = false
consumer.iterator.timeout.ms = 1
response.mediatype.default = application/vnd.kafka.v1+json
ssl.keystore.type = JKS
ssl.trustmanager.algorithm =
schema.registry.url = http://localhost:8081
metrics.jmx.prefix = kafka.rest
request.logger.name = io.confluent.rest-utils.requests
ssl.key.password =
ssl.truststore.password =
id =
host.name =
consumer.request.max.bytes = 67108864
metrics.num.samples = 2
ssl.endpoint.identification.algorithm =
consumer.threads = 1
ssl.protocol = TLS
debug = false
listeners = [http://localhost:8082]
ssl.provider =
ssl.enabled.protocols = []
producer.threads = 5
shutdown.graceful.ms = 1000
ssl.keystore.location =
response.mediatype.preferred = [application/vnd.kafka.v1+json, application/vnd.kafka+json, application/json]
consumer.request.timeout.ms = 1000
ssl.cipher.suites = []
ssl.truststore.type = JKS
consumer.instance.timeout.ms = 300000
access.control.allow.methods =
consumer.iterator.backoff.ms = 50
access.control.allow.origin =
ssl.truststore.location =
ssl.keystore.password =
ssl.keymanager.algorithm =
zookeeper.connect = zookeeper1:2181
port = 8082
metrics.sample.window.ms = 30000
simpleconsumer.pool.size.max = 25
(io.confluent.kafkarest.KafkaRestConfig)
[2017-09-05 16:51:13,655] INFO Starting ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2017-09-05 16:51:13,681] INFO Client environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT (org.apache.zookeeper.ZooKeeper)
[2017-09-05 16:51:13,681] INFO Client environment:host.name=4e53c563d08a (org.apache.zookeeper.ZooKeeper)
[2017-09-05 16:51:13,681] INFO Client environment:java.version=1.8.0_92 (org.apache.zookeeper.ZooKeeper)
[2017-09-05 16:51:13,681] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
[2017-09-05 16:51:13,681] INFO Client environment:java.home=/usr/lib/jvm/zulu-8-amd64/jre (org.apache.zookeeper.ZooKeeper)
[2017-09-05 16:51:13,682] INFO Client environment:java.class.path=:/usr/bin/../target/kafka-rest-*-development/share/java/kafka-rest/*:/usr/bin/../share/java/confluent-common/netty-3.2.2.Final.jar:/usr/bin/../share/java/confluent-common/slf4j-log4j12-1.7.6.jar:/usr/bin/../share/java/confluent-common/jline-0.9.94.jar:/usr/bin/../share/java/confluent-common/common-config-3.0.1.jar:/usr/bin/../share/java/confluent-common/log4j-1.2.17.jar:/usr/bin/../share/java/confluent-common/zookeeper-3.4.3.jar:/usr/bin/../share/java/confluent-common/slf4j-api-1.7.6.jar:/usr/bin/../share/java/confluent-common/zkclient-0.5.jar:/usr/bin/../share/java/confluent-common/common-metrics-3.0.1.jar:/usr/bin/../share/java/confluent-common/common-utils-3.0.1.jar:/usr/bin/../share/java/rest-utils/hk2-locator-2.4.0-b25.jar:/usr/bin/../share/java/rest-utils/javax.inject-2.4.0-b25.jar:/usr/bin/../share/java/rest-utils/jetty-server-9.2.12.v20150709.jar:/usr/bin/../share/java/rest-utils/jersey-server-2.19.jar:/usr/bin/../share/java/rest-utils/jersey-container-servlet-core-2.19.jar:/usr/bin/../share/java/rest-utils/jersey-container-jetty-http-2.19.jar:/usr/bin/../share/java/rest-utils/jackson-databind-2.5.4.jar:/usr/bin/../share/java/rest-utils/jetty-continuation-9.2.12.v20150709.jar:/usr/bin/../share/java/rest-utils/jersey-common-2.19.jar:/usr/bin/../share/java/rest-utils/hk2-api-2.4.0-b25.jar:/usr/bin/../share/java/rest-utils/osgi-resource-locator-1.0.1.jar:/usr/bin/../share/java/rest-utils/jackson-module-jaxb-annotations-2.5.4.jar:/usr/bin/../share/java/rest-utils/jackson-jaxrs-json-provider-2.5.4.jar:/usr/bin/../share/java/rest-utils/hibernate-validator-5.1.2.Final.jar:/usr/bin/../share/java/rest-utils/javassist-3.18.1-GA.jar:/usr/bin/../share/java/rest-utils/jetty-security-9.2.12.v20150709.jar:/usr/bin/../share/java/rest-utils/hk2-utils-2.4.0-b25.jar:/usr/bin/../share/java/rest-utils/aopalliance-repackaged-2.4.0-b25.jar:/usr/bin/../share/java/rest-utils/jetty-servlet-9.2.12.v20150709.jar:/usr/bin
/../share/java/rest-utils/validation-api-1.1.0.Final.jar:/usr/bin/../share/java/rest-utils/javax.el-api-2.2.4.jar:/usr/bin/../share/java/rest-utils/jetty-servlets-9.2.12.v20150709.jar:/usr/bin/../share/java/rest-utils/jetty-http-9.2.12.v20150709.jar:/usr/bin/../share/java/rest-utils/rest-utils-3.0.1.jar:/usr/bin/../share/java/rest-utils/jersey-bean-validation-2.19.jar:/usr/bin/../share/java/rest-utils/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/rest-utils/jetty-io-9.2.12.v20150709.jar:/usr/bin/../share/java/rest-utils/rest-utils-examples-3.0.1.jar:/usr/bin/../share/java/rest-utils/jboss-logging-3.1.3.GA.jar:/usr/bin/../share/java/rest-utils/jetty-util-9.2.12.v20150709.jar:/usr/bin/../share/java/rest-utils/jersey-test-framework-provider-jetty-2.19.jar:/usr/bin/../share/java/rest-utils/junit-4.12.jar:/usr/bin/../share/java/rest-utils/jersey-guava-2.19.jar:/usr/bin/../share/java/rest-utils/asm-debug-all-5.0.3.jar:/usr/bin/../share/java/rest-utils/javax.annotation-api-1.2.jar:/usr/bin/../share/java/rest-utils/jackson-jaxrs-base-2.5.4.jar:/usr/bin/../share/java/rest-utils/javax.ws.rs-api-2.0.1.jar:/usr/bin/../share/java/rest-utils/jersey-container-servlet-2.19.jar:/usr/bin/../share/java/rest-utils/jackson-annotations-2.5.4.jar:/usr/bin/../share/java/rest-utils/classmate-1.0.0.jar:/usr/bin/../share/java/rest-utils/javax.el-2.2.4.jar:/usr/bin/../share/java/rest-utils/jersey-media-jaxb-2.19.jar:/usr/bin/../share/java/rest-utils/rest-utils-test-3.0.1.jar:/usr/bin/../share/java/rest-utils/jersey-test-framework-core-2.19.jar:/usr/bin/../share/java/rest-utils/hamcrest-core-1.3.jar:/usr/bin/../share/java/rest-utils/jackson-core-2.5.4.jar:/usr/bin/../share/java/rest-utils/jersey-client-2.19.jar:/usr/bin/../share/java/kafka-rest/jackson-mapper-asl-1.9.13.jar:/usr/bin/../share/java/kafka-rest/scala-parser-combinators_2.11-1.0.4.jar:/usr/bin/../share/java/kafka-rest/lz4-1.3.0.jar:/usr/bin/../share/java/kafka-rest/slf4j-log4j12-1.7.21.jar:/usr/bin/../share/java/kafka-rest/zook
eeper-3.4.6.jar:/usr/bin/../share/java/kafka-rest/activation-1.1.jar:/usr/bin/../share/java/kafka-rest/kafka-rest-3.0.1.jar:/usr/bin/../share/java/kafka-rest/jline-0.9.94.jar:/usr/bin/../share/java/kafka-rest/mail-1.4.jar:/usr/bin/../share/java/kafka-rest/kafka_2.11-0.10.0.1-cp1.jar:/usr/bin/../share/java/kafka-rest/jopt-simple-4.9.jar:/usr/bin/../share/java/kafka-rest/slf4j-api-1.7.21.jar:/usr/bin/../share/java/kafka-rest/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka-rest/netty-3.7.0.Final.jar:/usr/bin/../share/java/kafka-rest/commons-compress-1.4.1.jar:/usr/bin/../share/java/kafka-rest/avro-1.7.7.jar:/usr/bin/../share/java/kafka-rest/kafka-avro-serializer-3.0.1.jar:/usr/bin/../share/java/kafka-rest/log4j-1.2.15.jar:/usr/bin/../share/java/kafka-rest/kafka-json-serializer-3.0.1.jar:/usr/bin/../share/java/kafka-rest/paranamer-2.3.jar:/usr/bin/../share/java/kafka-rest/zkclient-0.8.jar:/usr/bin/../share/java/kafka-rest/jackson-core-asl-1.9.13.jar:/usr/bin/../share/java/kafka-rest/snappy-java-1.1.2.6.jar:/usr/bin/../share/java/kafka-rest/kafka-clients-0.10.0.1-cp1.jar:/usr/bin/../share/java/kafka-rest/kafka-schema-registry-client-3.0.1.jar:/usr/bin/../share/java/kafka-rest/xz-1.0.jar:/usr/bin/../share/java/kafka-rest/scala-library-2.11.8.jar (org.apache.zookeeper.ZooKeeper)
[2017-09-05 16:51:13,682] INFO Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
[2017-09-05 16:51:13,683] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
[2017-09-05 16:51:13,683] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
[2017-09-05 16:51:13,683] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
[2017-09-05 16:51:13,683] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
[2017-09-05 16:51:13,683] INFO Client environment:os.version=4.9.27-moby (org.apache.zookeeper.ZooKeeper)
[2017-09-05 16:51:13,683] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper)
[2017-09-05 16:51:13,683] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper)
[2017-09-05 16:51:13,683] INFO Client environment:user.dir=/ (org.apache.zookeeper.ZooKeeper)
[2017-09-05 16:51:13,688] INFO Initiating client connection, connectString=zookeeper1:2181 sessionTimeout=30000 watcher=org.I0Itec.zkclient.ZkClient@29f69090 (org.apache.zookeeper.ZooKeeper)
[2017-09-05 16:51:13,753] INFO Opening socket connection to server /172.18.0.2:2181 (org.apache.zookeeper.ClientCnxn)
[2017-09-05 16:51:13,761] INFO Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2017-09-05 16:51:14,096] INFO Socket connection established to zookeeper1.bam2/172.18.0.2:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2017-09-05 16:51:14,142] INFO Session establishment complete on server zookeeper1.bam2/172.18.0.2:2181, sessionid = 0x15e377bdb2c0124, negotiated timeout = 30000 (org.apache.zookeeper.ClientCnxn)
[2017-09-05 16:51:14,151] INFO zookeeper state changed (SyncConnected) (org.I0Itec.zkclient.ZkClient)
[2017-09-05 16:51:14,977] INFO ProducerConfig values:
metric.reporters = []
metadata.max.age.ms = 300000
reconnect.backoff.ms = 50
sasl.kerberos.ticket.renew.window.factor = 0.8
bootstrap.servers = [PLAINTEXT://kafka1:9092, SSL://kafka1:9093]
ssl.keystore.type = JKS
sasl.mechanism = GSSAPI
max.block.ms = 60000
interceptor.classes = null
ssl.truststore.password = null
client.id =
ssl.endpoint.identification.algorithm = null
request.timeout.ms = 30000
acks = 1
receive.buffer.bytes = 32768
ssl.truststore.type = JKS
retries = 0
ssl.truststore.location = null
ssl.keystore.password = null
send.buffer.bytes = 131072
compression.type = none
metadata.fetch.timeout.ms = 60000
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
buffer.memory = 33554432
timeout.ms = 30000
key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.trustmanager.algorithm = PKIX
block.on.buffer.full = false
ssl.key.password = null
sasl.kerberos.min.time.before.relogin = 60000
connections.max.idle.ms = 540000
max.in.flight.requests.per.connection = 5
metrics.num.samples = 2
ssl.protocol = TLS
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
batch.size = 16384
ssl.keystore.location = null
ssl.cipher.suites = null
security.protocol = PLAINTEXT
max.request.size = 1048576
value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
ssl.keymanager.algorithm = SunX509
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
linger.ms = 0
(org.apache.kafka.clients.producer.ProducerConfig)
[2017-09-05 16:51:15,081] INFO ProducerConfig values:
metric.reporters = []
metadata.max.age.ms = 300000
reconnect.backoff.ms = 50
sasl.kerberos.ticket.renew.window.factor = 0.8
bootstrap.servers = [PLAINTEXT://kafka1:9092, SSL://kafka1:9093]
ssl.keystore.type = JKS
sasl.mechanism = GSSAPI
max.block.ms = 60000
interceptor.classes = null
ssl.truststore.password = null
client.id = producer-1
ssl.endpoint.identification.algorithm = null
request.timeout.ms = 30000
acks = 1
receive.buffer.bytes = 32768
ssl.truststore.type = JKS
retries = 0
ssl.truststore.location = null
ssl.keystore.password = null
send.buffer.bytes = 131072
compression.type = none
metadata.fetch.timeout.ms = 60000
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
buffer.memory = 33554432
timeout.ms = 30000
key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.trustmanager.algorithm = PKIX
block.on.buffer.full = false
ssl.key.password = null
sasl.kerberos.min.time.before.relogin = 60000
connections.max.idle.ms = 540000
max.in.flight.requests.per.connection = 5
metrics.num.samples = 2
ssl.protocol = TLS
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
batch.size = 16384
ssl.keystore.location = null
ssl.cipher.suites = null
security.protocol = PLAINTEXT
max.request.size = 1048576
value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
ssl.keymanager.algorithm = SunX509
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
linger.ms = 0
(org.apache.kafka.clients.producer.ProducerConfig)
[2017-09-05 16:51:15,092] WARN The configuration zookeeper.connect = zookeeper1:2181 was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
[2017-09-05 16:51:15,092] WARN The configuration listeners = http://localhost:8082 was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
[2017-09-05 16:51:15,095] INFO Kafka version : 0.10.0.1-cp1 (org.apache.kafka.common.utils.AppInfoParser)
[2017-09-05 16:51:15,095] INFO Kafka commitId : e7288edd541cee03 (org.apache.kafka.common.utils.AppInfoParser)
[2017-09-05 16:51:15,102] INFO KafkaJsonSerializerConfig values:
json.indent.output = false
(io.confluent.kafka.serializers.KafkaJsonSerializerConfig)
[2017-09-05 16:51:15,102] INFO KafkaJsonSerializerConfig values:
json.indent.output = false
(io.confluent.kafka.serializers.KafkaJsonSerializerConfig)
[2017-09-05 16:51:15,103] INFO ProducerConfig values:
metric.reporters = []
metadata.max.age.ms = 300000
reconnect.backoff.ms = 50
sasl.kerberos.ticket.renew.window.factor = 0.8
bootstrap.servers = [PLAINTEXT://kafka1:9092, SSL://kafka1:9093]
ssl.keystore.type = JKS
sasl.mechanism = GSSAPI
max.block.ms = 60000
interceptor.classes = null
ssl.truststore.password = null
client.id =
ssl.endpoint.identification.algorithm = null
request.timeout.ms = 30000
acks = 1
receive.buffer.bytes = 32768
ssl.truststore.type = JKS
retries = 0
ssl.truststore.location = null
ssl.keystore.password = null
send.buffer.bytes = 131072
compression.type = none
metadata.fetch.timeout.ms = 60000
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
buffer.memory = 33554432
timeout.ms = 30000
key.serializer = class io.confluent.kafka.serializers.KafkaJsonSerializer
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.trustmanager.algorithm = PKIX
block.on.buffer.full = false
ssl.key.password = null
sasl.kerberos.min.time.before.relogin = 60000
connections.max.idle.ms = 540000
max.in.flight.requests.per.connection = 5
metrics.num.samples = 2
ssl.protocol = TLS
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
batch.size = 16384
ssl.keystore.location = null
ssl.cipher.suites = null
security.protocol = PLAINTEXT
max.request.size = 1048576
value.serializer = class io.confluent.kafka.serializers.KafkaJsonSerializer
ssl.keymanager.algorithm = SunX509
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
linger.ms = 0
(org.apache.kafka.clients.producer.ProducerConfig)
[2017-09-05 16:51:15,109] INFO ProducerConfig values:
metric.reporters = []
metadata.max.age.ms = 300000
reconnect.backoff.ms = 50
sasl.kerberos.ticket.renew.window.factor = 0.8
bootstrap.servers = [PLAINTEXT://kafka1:9092, SSL://kafka1:9093]
ssl.keystore.type = JKS
sasl.mechanism = GSSAPI
max.block.ms = 60000
interceptor.classes = null
ssl.truststore.password = null
client.id = producer-2
ssl.endpoint.identification.algorithm = null
request.timeout.ms = 30000
acks = 1
receive.buffer.bytes = 32768
ssl.truststore.type = JKS
retries = 0
ssl.truststore.location = null
ssl.keystore.password = null
send.buffer.bytes = 131072
compression.type = none
metadata.fetch.timeout.ms = 60000
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
buffer.memory = 33554432
timeout.ms = 30000
key.serializer = class io.confluent.kafka.serializers.KafkaJsonSerializer
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.trustmanager.algorithm = PKIX
block.on.buffer.full = false
ssl.key.password = null
sasl.kerberos.min.time.before.relogin = 60000
connections.max.idle.ms = 540000
max.in.flight.requests.per.connection = 5
metrics.num.samples = 2
ssl.protocol = TLS
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
batch.size = 16384
ssl.keystore.location = null
ssl.cipher.suites = null
security.protocol = PLAINTEXT
max.request.size = 1048576
value.serializer = class io.confluent.kafka.serializers.KafkaJsonSerializer
ssl.keymanager.algorithm = SunX509
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
linger.ms = 0
(org.apache.kafka.clients.producer.ProducerConfig)
[2017-09-05 16:51:15,109] WARN The configuration zookeeper.connect = zookeeper1:2181 was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
[2017-09-05 16:51:15,109] WARN The configuration listeners = http://localhost:8082 was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
[2017-09-05 16:51:15,109] INFO Kafka version : 0.10.0.1-cp1 (org.apache.kafka.common.utils.AppInfoParser)
[2017-09-05 16:51:15,109] INFO Kafka commitId : e7288edd541cee03 (org.apache.kafka.common.utils.AppInfoParser)
[2017-09-05 16:51:15,823] INFO KafkaAvroSerializerConfig values:
schema.registry.url = [http://localhost:8081]
max.schemas.per.subject = 1000
(io.confluent.kafka.serializers.KafkaAvroSerializerConfig)
[2017-09-05 16:51:15,856] INFO KafkaAvroSerializerConfig values:
schema.registry.url = [http://localhost:8081]
max.schemas.per.subject = 1000
(io.confluent.kafka.serializers.KafkaAvroSerializerConfig)
[2017-09-05 16:51:15,858] INFO ProducerConfig values:
metric.reporters = []
metadata.max.age.ms = 300000
reconnect.backoff.ms = 50
sasl.kerberos.ticket.renew.window.factor = 0.8
bootstrap.servers = [PLAINTEXT://kafka1:9092, SSL://kafka1:9093]
ssl.keystore.type = JKS
sasl.mechanism = GSSAPI
max.block.ms = 60000
interceptor.classes = null
ssl.truststore.password = null
client.id =
ssl.endpoint.identification.algorithm = null
request.timeout.ms = 30000
acks = 1
receive.buffer.bytes = 32768
ssl.truststore.type = JKS
retries = 0
ssl.truststore.location = null
ssl.keystore.password = null
send.buffer.bytes = 131072
compression.type = none
metadata.fetch.timeout.ms = 60000
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
buffer.memory = 33554432
timeout.ms = 30000
key.serializer = class io.confluent.kafka.serializers.KafkaAvroSerializer
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.trustmanager.algorithm = PKIX
block.on.buffer.full = false
ssl.key.password = null
sasl.kerberos.min.time.before.relogin = 60000
connections.max.idle.ms = 540000
max.in.flight.requests.per.connection = 5
metrics.num.samples = 2
ssl.protocol = TLS
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
batch.size = 16384
ssl.keystore.location = null
ssl.cipher.suites = null
security.protocol = PLAINTEXT
max.request.size = 1048576
value.serializer = class io.confluent.kafka.serializers.KafkaAvroSerializer
ssl.keymanager.algorithm = SunX509
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
linger.ms = 0
(org.apache.kafka.clients.producer.ProducerConfig)
[2017-09-05 16:51:15,869] INFO ProducerConfig values:
metric.reporters = []
metadata.max.age.ms = 300000
reconnect.backoff.ms = 50
sasl.kerberos.ticket.renew.window.factor = 0.8
bootstrap.servers = [PLAINTEXT://kafka1:9092, SSL://kafka1:9093]
ssl.keystore.type = JKS
sasl.mechanism = GSSAPI
max.block.ms = 60000
interceptor.classes = null
ssl.truststore.password = null
client.id = producer-3
ssl.endpoint.identification.algorithm = null
request.timeout.ms = 30000
acks = 1
receive.buffer.bytes = 32768
ssl.truststore.type = JKS
retries = 0
ssl.truststore.location = null
ssl.keystore.password = null
send.buffer.bytes = 131072
compression.type = none
metadata.fetch.timeout.ms = 60000
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
buffer.memory = 33554432
timeout.ms = 30000
key.serializer = class io.confluent.kafka.serializers.KafkaAvroSerializer
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.trustmanager.algorithm = PKIX
block.on.buffer.full = false
ssl.key.password = null
sasl.kerberos.min.time.before.relogin = 60000
connections.max.idle.ms = 540000
max.in.flight.requests.per.connection = 5
metrics.num.samples = 2
ssl.protocol = TLS
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
batch.size = 16384
ssl.keystore.location = null
ssl.cipher.suites = null
security.protocol = PLAINTEXT
max.request.size = 1048576
value.serializer = class io.confluent.kafka.serializers.KafkaAvroSerializer
ssl.keymanager.algorithm = SunX509
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
linger.ms = 0
(org.apache.kafka.clients.producer.ProducerConfig)
[2017-09-05 16:51:15,869] WARN The configuration schema.registry.url = http://localhost:8081 was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
[2017-09-05 16:51:15,869] WARN The configuration zookeeper.connect = zookeeper1:2181 was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
[2017-09-05 16:51:15,869] WARN The configuration listeners = http://localhost:8082 was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
[2017-09-05 16:51:15,869] INFO Kafka version : 0.10.0.1-cp1 (org.apache.kafka.common.utils.AppInfoParser)
[2017-09-05 16:51:15,869] INFO Kafka commitId : e7288edd541cee03 (org.apache.kafka.common.utils.AppInfoParser)
[2017-09-05 16:51:15,903] INFO Verifying properties (kafka.utils.VerifiableProperties)
[2017-09-05 16:51:15,911] INFO Property group.id is overridden to (kafka.utils.VerifiableProperties)
[2017-09-05 16:51:15,912] WARN Property listeners is not valid (kafka.utils.VerifiableProperties)
[2017-09-05 16:51:15,912] INFO Property zookeeper.connect is overridden to (kafka.utils.VerifiableProperties)
[2017-09-05 16:51:15,926] INFO KafkaAvroDeserializerConfig values:
schema.registry.url = [http://localhost:8081]
max.schemas.per.subject = 1000
specific.avro.reader = false
(io.confluent.kafka.serializers.KafkaAvroDeserializerConfig)
[2017-09-05 16:51:15,936] INFO KafkaJsonDecoderConfig values:
json.fail.unknown.properties = true
(io.confluent.kafka.serializers.KafkaJsonDecoderConfig)
[2017-09-05 16:51:16,005] INFO Logging initialized @3831ms (org.eclipse.jetty.util.log)
[2017-09-05 16:51:16,039] INFO Adding listener: http://localhost:8082 (io.confluent.rest.Application)
[2017-09-05 16:51:16,169] INFO jetty-9.2.12.v20150709 (org.eclipse.jetty.server.Server)
[2017-09-05 16:51:17,235] INFO HV000001: Hibernate Validator 5.1.2.Final (org.hibernate.validator.internal.util.Version)
[2017-09-05 16:51:17,532] INFO Started o.e.j.s.ServletContextHandler@51e0301d{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
[2017-09-05 16:51:17,561] INFO Started NetworkTrafficServerConnector@260f2144{HTTP/1.1}{localhost:8082} (org.eclipse.jetty.server.NetworkTrafficServerConnector)
[2017-09-05 16:51:17,563] INFO Started @5389ms (org.eclipse.jetty.server.Server)
[2017-09-05 16:51:17,563] INFO Server started, listening for requests... (io.confluent.kafkarest.KafkaRestMain)
It looks like no requests are getting into the REST Proxy itself, but the preflight checks against, e.g., Kafka, are working fine. Are you able to connect to Kafka directly, e.g. with the console producer/consumer? Maybe docker info would provide some more insight about what's going wrong? Can you provide the full output if you make a request with, e.g., curl -v?
The curl command gives this:
> curl -v localhost:8082/topics
* Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 8082 (#0)
> GET /topics HTTP/1.1
> Host: localhost:8082
> User-Agent: curl/7.51.0
> Accept: */*
>
* Curl_http_done: called premature == 0
* Empty reply from server
* Connection #0 to host localhost left intact
curl: (52) Empty reply from server
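An "Empty reply from server" like this is consistent with the listener being bound to localhost inside the container (KAFKA_REST_LISTENERS=http://localhost:8082): Docker publishes port 8082 on the host and forwards connections into the container, but a server bound to the container's loopback interface never sees traffic arriving on eth0, so the forwarded connection is closed with no response. A minimal sketch of the bind-address difference, using plain Python sockets (nothing here is Kafka-specific):

```python
import socket

# A server bound to "localhost" listens only on the loopback interface;
# inside a container, Docker's published-port traffic arrives on eth0,
# so such a server never receives it. Binding to 0.0.0.0 listens on all
# interfaces, including eth0.

def bound_address(host):
    """Return the local address a TCP socket actually binds to for `host`."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind((host, 0))  # port 0: let the OS pick a free port
    addr = s.getsockname()[0]
    s.close()
    return addr

print(bound_address("localhost"))  # loopback only: 127.0.0.1
print(bound_address("0.0.0.0"))    # all interfaces, incl. a container's eth0
```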
@hectorgarzon, did you solve this problem? If yes, please tell me how.
No, sorry.
BTW, it seems I solved it. I put everything on one bridge network, and in the advertised-listener environment variables for Kafka, the REST Proxy, etc., I specified the service name instead of localhost. For example:
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka2:9093
KAFKA_REST_LISTENERS: http://rest:8082
, ...
And then I mapped those ports to the host:
ports:
- 8082:8082
After that, when one service calls another, they use the service names as host names; when I need to call them from outside of Docker, I use 'localhost:9093', etc.
Working docker-compose.yml: docker-compose.txt
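The attached compose file isn't reproduced here, but a minimal sketch of the pattern described above (service names in the advertised listeners, ports mapped to the host) might look like the following; the service names, image tags, and port numbers are illustrative, not taken from the attachment:

```yaml
version: '2'
services:
  zookeeper1:
    image: confluentinc/cp-zookeeper:3.3.0
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
  kafka1:
    image: confluentinc/cp-kafka:3.3.0
    depends_on: [zookeeper1]
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper1:2181
      # Advertise the service name, not localhost, so other containers
      # on the bridge network can reach the broker under that name.
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka1:9092
    ports:
      - "9092:9092"
  rest:
    image: confluentinc/cp-kafka-rest:3.3.0
    depends_on: [kafka1]
    environment:
      KAFKA_REST_ZOOKEEPER_CONNECT: zookeeper1:2181
      KAFKA_REST_HOST_NAME: rest
      # Listen on the service name (or 0.0.0.0), not localhost, so the
      # published host port can reach the server inside the container.
      KAFKA_REST_LISTENERS: http://rest:8082
    ports:
      - "8082:8082"
```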
I have Kafka and ZooKeeper running on a Docker network called bam2. When I run:
docker run -d --net=bam2 --name=kafka-rest -e ACCESS_CONTROL_ALLOW_ORIGIN_DEFAULT="*" -e KAFKA_REST_ZOOKEEPER_CONNECT=zookeeper1:2181 -e KAFKA_REST_LISTENERS=http://localhost:8082 -p:8082:8082 confluentinc/cp-kafka-rest:3.0.1
and access localhost:8082/topics, it gives me ERR_EMPTY_RESPONSE. I can't get the Docker image for kafka-rest to work. If I run ./bin/kafka-rest-start ./etc/kafka-rest/kafka-rest.properties from Confluent 3.3.0, the REST API for Kafka works fine. What am I doing wrong?
Thanks