networknt / light-eventuate-4j

An eventual consistency framework based on Event Sourcing and CQRS on top of light-4j and Kafka
Apache License 2.0

rest-query cannot be started #27

Closed notesby closed 7 years ago

notesby commented 7 years ago

Hi, I am trying to follow the tutorial, but when I run the command

java -jar target/rest-query-1.0.0.jar

I am getting this error:

ERROR c.n.e.k.c.EventuateKafkaConsumer start - Error subscribing
org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
    at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:765)
    at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:633)
    at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:615)
    at com.networknt.eventuate.kafka.consumer.EventuateKafkaConsumer.start(EventuateKafkaConsumer.java:79)
    at com.networknt.eventuate.client.KafkaAggregateSubscriptions.subscribe(KafkaAggregateSubscriptions.java:81)
    at com.networknt.eventuate.common.impl.EventuateAggregateStoreImpl.subscribe(EventuateAggregateStoreImpl.java:154)
    at com.networknt.eventuate.client.EventDispatcherInitializer.registerEventHandler(EventDispatcherInitializer.java:163)
    at com.networknt.eventuate.client.EventuateClientStartupHookProvider.onStartup(EventuateClientStartupHookProvider.java:48)
    at com.networknt.server.Server.start(Server.java:112)
    at com.networknt.server.Server.main(Server.java:101)
Caused by: org.apache.kafka.common.config.ConfigException: No resolvable bootstrap urls given in bootstrap.servers
    at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:64)
    at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:691)
    ... 9 common frames omitted
Exception in thread "main" java.lang.RuntimeException: org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
    at com.networknt.eventuate.kafka.consumer.EventuateKafkaConsumer.start(EventuateKafkaConsumer.java:131)
    at com.networknt.eventuate.client.KafkaAggregateSubscriptions.subscribe(KafkaAggregateSubscriptions.java:81)
    at com.networknt.eventuate.common.impl.EventuateAggregateStoreImpl.subscribe(EventuateAggregateStoreImpl.java:154)
    at com.networknt.eventuate.client.EventDispatcherInitializer.registerEventHandler(EventDispatcherInitializer.java:163)
    at com.networknt.eventuate.client.EventuateClientStartupHookProvider.onStartup(EventuateClientStartupHookProvider.java:48)
    at com.networknt.server.Server.start(Server.java:112)
    at com.networknt.server.Server.main(Server.java:101)
Caused by: org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
    at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:765)
    at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:633)
    at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:615)
    at com.networknt.eventuate.kafka.consumer.EventuateKafkaConsumer.start(EventuateKafkaConsumer.java:79)
    ... 6 more
Caused by: org.apache.kafka.common.config.ConfigException: No resolvable bootstrap urls given in bootstrap.servers
    at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:64)
    at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:691)
    ... 9 more

Kafka, ZooKeeper, MySQL, and the CDC server are all running. I am on a MacBook Pro with Docker version 17.06.0-ce-mac19.

GavinChenYan commented 7 years ago

Based on the error, this is most likely the Kafka DOCKER_HOST_IP issue on Mac:

https://docs.docker.com/docker-for-mac/networking/

Please follow the "Setting DOCKER_HOST_IP for Mac" section of the tutorial to set up DOCKER_HOST_IP: https://networknt.github.io/light-eventuate-4j/tutorial/service-dev/

Basically, you need to run the following commands before starting the ZooKeeper, Kafka, and MySQL docker-compose:

sudo ifconfig lo0 alias 10.200.10.1/24   # 10.200.10.1 is any unused IP address
export DOCKER_HOST_IP=10.200.10.1

This is a Mac-only issue. If the services themselves also run inside docker-compose, you don't need to worry about DOCKER_HOST_IP at all.
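The underlying "No resolvable bootstrap urls given in bootstrap.servers" error means the host portion of the configured broker address does not resolve from wherever rest-query runs. A quick sanity check, as a sketch (the check_bootstrap helper is ours, not part of light-eventuate-4j; getent is Linux-only, so on macOS use dscacheutil -q host -a name <host> instead):

```shell
# Check whether the host part of a bootstrap.servers entry resolves.
# check_bootstrap is our own helper, just for diagnosis.
check_bootstrap() {
  entry="$1"
  host="${entry%:*}"                      # strip the :port suffix
  if getent hosts "$host" >/dev/null 2>&1; then
    echo "resolvable: $entry"
  else
    echo "NOT resolvable: $entry"
  fi
}

check_bootstrap "localhost:9092"
# In this setup you would check the value you configured, e.g.:
# check_bootstrap "$DOCKER_HOST_IP:9092"
```

If your configured entry comes back "NOT resolvable", the consumer will fail to construct exactly as in the stack trace above.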

Another possible cause, also mentioned in the tutorial, is that the CDC server must wait until the MySQL, ZooKeeper, and Kafka compose services have all started successfully:

Start CDC server: open another terminal and start the CDC server with another docker-compose. Note: you have to wait until the MySQL, ZooKeeper, and Kafka compose services above are all started successfully before running docker-compose-cdcserver.

We are working on CDC retry logic and will fix this issue in the next release.
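Until that lands, a small wait loop in front of the cdc-server compose works around the startup race. This is our own sketch (wait_for_port is not part of light-eventuate-4j; it uses python3 for a portable TCP probe):

```shell
# Block until a TCP port accepts connections, or give up after N retries.
# wait_for_port is our own helper, not part of light-eventuate-4j.
wait_for_port() {
  host="$1"; port="$2"; retries="${3:-30}"
  i=0
  until python3 -c "import socket; socket.create_connection(('$host', $port), 1)" 2>/dev/null; do
    i=$((i+1))
    [ "$i" -ge "$retries" ] && return 1   # give up
    sleep 1
  done
  return 0
}

# Usage (9092 is the Kafka port from this setup):
# wait_for_port "$DOCKER_HOST_IP" 9092 && docker-compose -f docker-compose-cdcserver.yml up
```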

notesby commented 7 years ago

I ran those commands, but it is still not working.

[screenshots: terminal output, 2017-08-16 3:14 pm and 3:11 pm]

stevehu commented 7 years ago

This is the list of commands I used on Mac. You must run docker-compose in the same terminal where you export DOCKER_HOST_IP.

cd ~/networknt/light-docker
sudo ifconfig lo0 alias 10.200.10.1/24
export DOCKER_HOST_IP=10.200.10.1
docker-compose -f docker-compose-eventuate.yml down
docker-compose -f docker-compose-eventuate.yml up 

You can verify that the environment variable was passed in with docker inspect [Kafka container id].
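To pull out just the Env section, docker inspect's --format flag takes a Go template, e.g. docker inspect --format '{{range .Config.Env}}{{println .}}{{end}}' lightdocker_kafka_1. The snippet below demonstrates the same filtering offline against a trimmed sample of the inspect JSON (the file path and sample values are just for illustration):

```shell
# Trimmed sample of 'docker inspect <kafka container>' output; in practice,
# pipe the real inspect output instead of this sample file.
cat > /tmp/inspect-sample.json <<'EOF'
[{"Config": {"Env": ["ADVERTISED_HOST_NAME=10.200.10.1", "ZOOKEEPER_SERVERS=zookeeper:2181"]}}]
EOF

# Print each Env entry and keep only the advertised-host line.
python3 -c "import json; [print(e) for e in json.load(open('/tmp/inspect-sample.json'))[0]['Config']['Env']]" \
  | grep ADVERTISED_HOST_NAME
```

If ADVERTISED_HOST_NAME does not show your DOCKER_HOST_IP value, the export did not reach the container.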

Let me know how it goes. Thanks.

notesby commented 7 years ago

This is what I get:


[
    {
        "Id": "d318625769916f42606be30c3f69f33333c15e77d8954d1e80f3cb77cdb4ac24",
        "Created": "2017-08-16T14:47:54.360867823Z",
        "Path": "/bin/sh",
        "Args": [
            "-c",
            "./run-kafka.sh"
        ],
        "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 10359,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2017-08-16T14:47:58.255938775Z",
            "FinishedAt": "0001-01-01T00:00:00Z"
        },
        "Image": "sha256:d578870a732d4ead0e55b051dc3ee411eb5acdd64dfda59f44a084ae6b9cc51e",
        "ResolvConfPath": "/var/lib/docker/containers/d318625769916f42606be30c3f69f33333c15e77d8954d1e80f3cb77cdb4ac24/resolv.conf",
        "HostnamePath": "/var/lib/docker/containers/d318625769916f42606be30c3f69f33333c15e77d8954d1e80f3cb77cdb4ac24/hostname",
        "HostsPath": "/var/lib/docker/containers/d318625769916f42606be30c3f69f33333c15e77d8954d1e80f3cb77cdb4ac24/hosts",
        "LogPath": "/var/lib/docker/containers/d318625769916f42606be30c3f69f33333c15e77d8954d1e80f3cb77cdb4ac24/d318625769916f42606be30c3f69f33333c15e77d8954d1e80f3cb77cdb4ac24-json.log",
        "Name": "/lightdocker_kafka_1",
        "RestartCount": 0,
        "Driver": "overlay2",
        "MountLabel": "",
        "ProcessLabel": "",
        "AppArmorProfile": "",
        "ExecIDs": null,
        "HostConfig": {
            "Binds": [],
            "ContainerIDFile": "",
            "LogConfig": {
                "Type": "json-file",
                "Config": {}
            },
            "NetworkMode": "localnet",
            "PortBindings": {
                "9092/tcp": [
                    {
                        "HostIp": "",
                        "HostPort": "9092"
                    }
                ]
            },
            "RestartPolicy": {
                "Name": "",
                "MaximumRetryCount": 0
            },
            "AutoRemove": false,
            "VolumeDriver": "",
            "VolumesFrom": [],
            "CapAdd": null,
            "CapDrop": null,
            "Dns": null,
            "DnsOptions": null,
            "DnsSearch": null,
            "ExtraHosts": null,
            "GroupAdd": null,
            "IpcMode": "",
            "Cgroup": "",
            "Links": null,
            "OomScoreAdj": 0,
            "PidMode": "",
            "Privileged": false,
            "PublishAllPorts": false,
            "ReadonlyRootfs": false,
            "SecurityOpt": null,
            "UTSMode": "",
            "UsernsMode": "",
            "ShmSize": 67108864,
            "Runtime": "runc",
            "ConsoleSize": [
                0,
                0
            ],
            "Isolation": "",
            "CpuShares": 0,
            "Memory": 0,
            "NanoCpus": 0,
            "CgroupParent": "",
            "BlkioWeight": 0,
            "BlkioWeightDevice": null,
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": 0,
            "CpuQuota": 0,
            "CpuRealtimePeriod": 0,
            "CpuRealtimeRuntime": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": null,
            "DeviceCgroupRules": null,
            "DiskQuota": 0,
            "KernelMemory": 0,
            "MemoryReservation": 0,
            "MemorySwap": 0,
            "MemorySwappiness": -1,
            "OomKillDisable": false,
            "PidsLimit": 0,
            "Ulimits": null,
            "CpuCount": 0,
            "CpuPercent": 0,
            "IOMaximumIOps": 0,
            "IOMaximumBandwidth": 0
        },
        "GraphDriver": {
            "Data": {
                "LowerDir": "/var/lib/docker/overlay2/aaadeeed9fe6decc386fa74ef334f204c9c09aa65900989451e88dd882085554-init/diff:/var/lib/docker/overlay2/b99dcb69c663a431ffdc8ce2c0a9a6eac5afbdfc50e464cc3637630897b42271/diff:/var/lib/docker/overlay2/c00021d579915f95ca4ec1a67b03548fa24f60697e0d459d34654f4782db5165/diff:/var/lib/docker/overlay2/c43dff991b5aea10d992acbc53333c52b8d8fb9503894dd5fba802504c0eef33/diff:/var/lib/docker/overlay2/947736647cec7a92ac7f408c39234d5bf43a37bb05ae6895f1aa8b0ab76cd7ed/diff:/var/lib/docker/overlay2/cf7329637af40c6d7e0396b347363f5dad9485f577bcae236005870dc35036a5/diff:/var/lib/docker/overlay2/3b8d67eacf928786acc1671dd865e51d660e4b899562395d69352cb0bd5cf782/diff:/var/lib/docker/overlay2/8c46721960b041d99acbae5e5fb7328810fc22fa246d684974acb0488fc0ede4/diff:/var/lib/docker/overlay2/497db0186d81a33f674d7f72a4218c23f8b34971ee00820cb3685590c89defaf/diff:/var/lib/docker/overlay2/b1425a6f4e7c9e052053806e56afccc62898b8f62c48c37978f139812c74ddf9/diff:/var/lib/docker/overlay2/5c4b988b962095e0bbba69acad398a923f21b7a09130b88ad971bedbb7c09631/diff:/var/lib/docker/overlay2/316e0e3b9806758a27677583f59a5f446901d2c13e426ee3660feacca6af3834/diff:/var/lib/docker/overlay2/fb8bb69c58feb77283240ea63fd46a869ddd98030617f4102951a7db49f0099e/diff",
                "MergedDir": "/var/lib/docker/overlay2/aaadeeed9fe6decc386fa74ef334f204c9c09aa65900989451e88dd882085554/merged",
                "UpperDir": "/var/lib/docker/overlay2/aaadeeed9fe6decc386fa74ef334f204c9c09aa65900989451e88dd882085554/diff",
                "WorkDir": "/var/lib/docker/overlay2/aaadeeed9fe6decc386fa74ef334f204c9c09aa65900989451e88dd882085554/work"
            },
            "Name": "overlay2"
        },
        "Mounts": [
            {
                "Type": "volume",
                "Name": "bb0f6e4583bd22221d782ae9b85f96171e02e818e548547d456bc54c62938069",
                "Source": "/var/lib/docker/volumes/bb0f6e4583bd22221d782ae9b85f96171e02e818e548547d456bc54c62938069/_data",
                "Destination": "/usr/local/kafka-config",
                "Driver": "local",
                "Mode": "",
                "RW": true,
                "Propagation": ""
            }
        ],
        "Config": {
            "Hostname": "d31862576991",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "ExposedPorts": {
                "9092/tcp": {}
            },
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "ADVERTISED_HOST_NAME=10.200.10.1",
                "ZOOKEEPER_SERVERS=zookeeper:2181",
                "KAFKA_HEAP_OPTS=-Xmx320m -Xms320m",
                "no_proxy=*.local, 169.254/16",
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                "LANG=C.UTF-8",
                "JAVA_HOME=/docker-java-home",
                "JAVA_VERSION=8u141",
                "JAVA_DEBIAN_VERSION=8u141-b15-1~deb9u1",
                "CA_CERTIFICATES_JAVA_VERSION=20170531+nmu1"
            ],
            "Cmd": [
                "/bin/sh",
                "-c",
                "./run-kafka.sh"
            ],
            "ArgsEscaped": true,
            "Image": "networknt/eventuate-kafka:latest",
            "Volumes": {
                "/usr/local/kafka-config": {}
            },
            "WorkingDir": "/usr/local/kafka_2.11-0.11.0.0",
            "Entrypoint": null,
            "OnBuild": null,
            "Labels": {
                "com.docker.compose.config-hash": "c2622f25ca6320489448f5ccb751603d8517533c05ff45a7bcfdb05ce34dac71",
                "com.docker.compose.container-number": "1",
                "com.docker.compose.oneoff": "False",
                "com.docker.compose.project": "lightdocker",
                "com.docker.compose.service": "kafka",
                "com.docker.compose.version": "1.14.0"
            }
        },
        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "03b8ec56895089cfec3e31264e8077bc56bba82eb66798ffaa892057b6c538b1",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {
                "9092/tcp": [
                    {
                        "HostIp": "0.0.0.0",
                        "HostPort": "9092"
                    }
                ]
            },
            "SandboxKey": "/var/run/docker/netns/03b8ec568950",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "",
            "Gateway": "",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "",
            "IPPrefixLen": 0,
            "IPv6Gateway": "",
            "MacAddress": "",
            "Networks": {
                "localnet": {
                    "IPAMConfig": null,
                    "Links": [
                        "lightdocker_zookeeper_1:lightdocker_zookeeper_1",
                        "lightdocker_zookeeper_1:zookeeper",
                        "lightdocker_zookeeper_1:zookeeper_1"
                    ],
                    "Aliases": [
                        "d31862576991",
                        "kafka"
                    ],
                    "NetworkID": "c7aca0878367e751c731a54091b6ac569a60b72f37a7c7c915881c4e64d9825a",
                    "EndpointID": "7ca7238bdbb4d12425b9d158357ec50bb9e8565f966bb44c73fe13d4fd3442eb",
                    "Gateway": "172.18.0.1",
                    "IPAddress": "172.18.0.4",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:ac:12:00:04",
                    "DriverOpts": null
                }
            }
        }
    }
]

And this is what I get when I start the services:

light-docker hector$ echo $DOCKER_HOST_IP
10.200.10.1
light-docker hector$ docker-compose -f docker-compose-eventuate.yml up
Creating lightdocker_mysql_1 ... 
Creating lightdocker_zookeeper_1 ... 
Creating lightdocker_zookeeper_1
Creating lightdocker_zookeeper_1 ... done
Creating lightdocker_kafka_1 ... 
Creating lightdocker_kafka_1 ... done
Attaching to lightdocker_mysql_1, lightdocker_zookeeper_1, lightdocker_kafka_1
mysql_1      | Initializing database
kafka_1      | ADVERTISED_HOST_NAME=10.200.10.1
zookeeper_1  | ZooKeeper JMX enabled by default
mysql_1      | 2017-08-16T19:49:50.444763Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
mysql_1      | 2017-08-16T19:49:53.328377Z 0 [Warning] InnoDB: New log files created, LSN=45790
mysql_1      | 2017-08-16T19:49:53.716451Z 0 [Warning] InnoDB: Creating foreign key constraint system tables.
mysql_1      | 2017-08-16T19:49:53.792186Z 0 [Warning] No existing UUID has been found, so we assume that this is the first time that this server has been started. Generating a new UUID: 12bce6a2-82bc-11e7-a165-0242ac120003.
mysql_1      | 2017-08-16T19:49:53.794506Z 0 [Warning] Gtid table is not ready to be used. Table 'mysql.gtid_executed' cannot be opened.
mysql_1      | 2017-08-16T19:49:53.796854Z 1 [Warning] root@localhost is created with an empty password ! Please consider switching off the --initialize-insecure option.
zookeeper_1  | Using config: /conf/zoo.cfg
zookeeper_1  | 2017-08-16 19:49:51,255 [myid:] - INFO  [main:QuorumPeerConfig@134] - Reading configuration from: /conf/zoo.cfg
zookeeper_1  | 2017-08-16 19:49:51,266 [myid:] - INFO  [main:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3
zookeeper_1  | 2017-08-16 19:49:51,267 [myid:] - INFO  [main:DatadirCleanupManager@79] - autopurge.purgeInterval set to 0
zookeeper_1  | 2017-08-16 19:49:51,267 [myid:] - INFO  [main:DatadirCleanupManager@101] - Purge task is not scheduled.
zookeeper_1  | 2017-08-16 19:49:51,269 [myid:] - WARN  [main:QuorumPeerMain@113] - Either no config or no quorum defined in config, running  in standalone mode
zookeeper_1  | 2017-08-16 19:49:51,291 [myid:] - INFO  [main:QuorumPeerConfig@134] - Reading configuration from: /conf/zoo.cfg
zookeeper_1  | 2017-08-16 19:49:51,292 [myid:] - INFO  [main:ZooKeeperServerMain@96] - Starting server
zookeeper_1  | 2017-08-16 19:49:51,310 [myid:] - INFO  [main:Environment@100] - Server environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT
zookeeper_1  | 2017-08-16 19:49:51,310 [myid:] - INFO  [main:Environment@100] - Server environment:host.name=285f0cbb54cb
zookeeper_1  | 2017-08-16 19:49:51,311 [myid:] - INFO  [main:Environment@100] - Server environment:java.version=1.8.0_131
zookeeper_1  | 2017-08-16 19:49:51,311 [myid:] - INFO  [main:Environment@100] - Server environment:java.vendor=Oracle Corporation
zookeeper_1  | 2017-08-16 19:49:51,312 [myid:] - INFO  [main:Environment@100] - Server environment:java.home=/usr/lib/jvm/java-1.8-openjdk/jre
zookeeper_1  | 2017-08-16 19:49:51,312 [myid:] - INFO  [main:Environment@100] - Server environment:java.class.path=/zookeeper-3.4.10/bin/../build/classes:/zookeeper-3.4.10/bin/../build/lib/*.jar:/zookeeper-3.4.10/bin/../lib/slf4j-log4j12-1.6.1.jar:/zookeeper-3.4.10/bin/../lib/slf4j-api-1.6.1.jar:/zookeeper-3.4.10/bin/../lib/netty-3.10.5.Final.jar:/zookeeper-3.4.10/bin/../lib/log4j-1.2.16.jar:/zookeeper-3.4.10/bin/../lib/jline-0.9.94.jar:/zookeeper-3.4.10/bin/../zookeeper-3.4.10.jar:/zookeeper-3.4.10/bin/../src/java/lib/*.jar:/conf:
zookeeper_1  | 2017-08-16 19:49:51,313 [myid:] - INFO  [main:Environment@100] - Server environment:java.library.path=/usr/lib/jvm/java-1.8-openjdk/jre/lib/amd64/server:/usr/lib/jvm/java-1.8-openjdk/jre/lib/amd64:/usr/lib/jvm/java-1.8-openjdk/jre/../lib/amd64:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
zookeeper_1  | 2017-08-16 19:49:51,314 [myid:] - INFO  [main:Environment@100] - Server environment:java.io.tmpdir=/tmp
zookeeper_1  | 2017-08-16 19:49:51,315 [myid:] - INFO  [main:Environment@100] - Server environment:java.compiler=<NA>
zookeeper_1  | 2017-08-16 19:49:51,319 [myid:] - INFO  [main:Environment@100] - Server environment:os.name=Linux
zookeeper_1  | 2017-08-16 19:49:51,319 [myid:] - INFO  [main:Environment@100] - Server environment:os.arch=amd64
zookeeper_1  | 2017-08-16 19:49:51,320 [myid:] - INFO  [main:Environment@100] - Server environment:os.version=4.9.36-moby
zookeeper_1  | 2017-08-16 19:49:51,320 [myid:] - INFO  [main:Environment@100] - Server environment:user.name=zookeeper
zookeeper_1  | 2017-08-16 19:49:51,320 [myid:] - INFO  [main:Environment@100] - Server environment:user.home=/home/zookeeper
zookeeper_1  | 2017-08-16 19:49:51,321 [myid:] - INFO  [main:Environment@100] - Server environment:user.dir=/zookeeper-3.4.10
zookeeper_1  | 2017-08-16 19:49:51,335 [myid:] - INFO  [main:ZooKeeperServer@829] - tickTime set to 2000
zookeeper_1  | 2017-08-16 19:49:51,335 [myid:] - INFO  [main:ZooKeeperServer@838] - minSessionTimeout set to -1
zookeeper_1  | 2017-08-16 19:49:51,336 [myid:] - INFO  [main:ZooKeeperServer@847] - maxSessionTimeout set to -1
zookeeper_1  | 2017-08-16 19:49:51,360 [myid:] - INFO  [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:2181
mysql_1      | 2017-08-16T19:49:55.274800Z 1 [Warning] 'user' entry 'root@localhost' ignored in --skip-name-resolve mode.
mysql_1      | 2017-08-16T19:49:55.274893Z 1 [Warning] 'user' entry 'mysql.session@localhost' ignored in --skip-name-resolve mode.
mysql_1      | 2017-08-16T19:49:55.274920Z 1 [Warning] 'user' entry 'mysql.sys@localhost' ignored in --skip-name-resolve mode.
mysql_1      | 2017-08-16T19:49:55.275026Z 1 [Warning] 'db' entry 'performance_schema mysql.session@localhost' ignored in --skip-name-resolve mode.
mysql_1      | 2017-08-16T19:49:55.275067Z 1 [Warning] 'db' entry 'sys mysql.sys@localhost' ignored in --skip-name-resolve mode.
mysql_1      | 2017-08-16T19:49:55.275177Z 1 [Warning] 'proxies_priv' entry '@ root@localhost' ignored in --skip-name-resolve mode.
mysql_1      | 2017-08-16T19:49:55.275696Z 1 [Warning] 'tables_priv' entry 'user mysql.session@localhost' ignored in --skip-name-resolve mode.
mysql_1      | 2017-08-16T19:49:55.275831Z 1 [Warning] 'tables_priv' entry 'sys_config mysql.sys@localhost' ignored in --skip-name-resolve mode.
kafka_1      | [2017-08-16 19:49:56,218] INFO KafkaConfig values: 
kafka_1      |  advertised.host.name = null
kafka_1      |  advertised.listeners = PLAINTEXT://10.200.10.1:9092
kafka_1      |  advertised.port = null
kafka_1      |  alter.config.policy.class.name = null
kafka_1      |  authorizer.class.name = 
kafka_1      |  auto.create.topics.enable = true
kafka_1      |  auto.leader.rebalance.enable = true
kafka_1      |  background.threads = 10
kafka_1      |  broker.id = 0
kafka_1      |  broker.id.generation.enable = true
kafka_1      |  broker.rack = null
kafka_1      |  compression.type = producer
kafka_1      |  connections.max.idle.ms = 600000
kafka_1      |  controlled.shutdown.enable = true
kafka_1      |  controlled.shutdown.max.retries = 3
kafka_1      |  controlled.shutdown.retry.backoff.ms = 5000
kafka_1      |  controller.socket.timeout.ms = 30000
kafka_1      |  create.topic.policy.class.name = null
kafka_1      |  default.replication.factor = 1
kafka_1      |  delete.records.purgatory.purge.interval.requests = 1
kafka_1      |  delete.topic.enable = false
kafka_1      |  fetch.purgatory.purge.interval.requests = 1000
kafka_1      |  group.initial.rebalance.delay.ms = 0
kafka_1      |  group.max.session.timeout.ms = 300000
kafka_1      |  group.min.session.timeout.ms = 6000
kafka_1      |  host.name = 
kafka_1      |  inter.broker.listener.name = null
kafka_1      |  inter.broker.protocol.version = 0.11.0-IV2
kafka_1      |  leader.imbalance.check.interval.seconds = 300
kafka_1      |  leader.imbalance.per.broker.percentage = 10
kafka_1      |  listener.security.protocol.map = SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,TRACE:TRACE,SASL_SSL:SASL_SSL,PLAINTEXT:PLAINTEXT
kafka_1      |  listeners = null
kafka_1      |  log.cleaner.backoff.ms = 15000
kafka_1      |  log.cleaner.dedupe.buffer.size = 134217728
kafka_1      |  log.cleaner.delete.retention.ms = 86400000
kafka_1      |  log.cleaner.enable = true
kafka_1      |  log.cleaner.io.buffer.load.factor = 0.9
kafka_1      |  log.cleaner.io.buffer.size = 524288
kafka_1      |  log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
kafka_1      |  log.cleaner.min.cleanable.ratio = 0.5
kafka_1      |  log.cleaner.min.compaction.lag.ms = 0
kafka_1      |  log.cleaner.threads = 1
kafka_1      |  log.cleanup.policy = [delete]
kafka_1      |  log.dir = /tmp/kafka-logs
kafka_1      |  log.dirs = /tmp/kafka-logs
kafka_1      |  log.flush.interval.messages = 9223372036854775807
kafka_1      |  log.flush.interval.ms = null
kafka_1      |  log.flush.offset.checkpoint.interval.ms = 60000
kafka_1      |  log.flush.scheduler.interval.ms = 9223372036854775807
kafka_1      |  log.flush.start.offset.checkpoint.interval.ms = 60000
kafka_1      |  log.index.interval.bytes = 4096
kafka_1      |  log.index.size.max.bytes = 10485760
kafka_1      |  log.message.format.version = 0.11.0-IV2
kafka_1      |  log.message.timestamp.difference.max.ms = 9223372036854775807
kafka_1      |  log.message.timestamp.type = CreateTime
kafka_1      |  log.preallocate = false
kafka_1      |  log.retention.bytes = -1
kafka_1      |  log.retention.check.interval.ms = 300000
kafka_1      |  log.retention.hours = 168
kafka_1      |  log.retention.minutes = null
kafka_1      |  log.retention.ms = null
kafka_1      |  log.roll.hours = 168
kafka_1      |  log.roll.jitter.hours = 0
kafka_1      |  log.roll.jitter.ms = null
kafka_1      |  log.roll.ms = null
kafka_1      |  log.segment.bytes = 1073741824
kafka_1      |  log.segment.delete.delay.ms = 60000
kafka_1      |  max.connections.per.ip = 2147483647
kafka_1      |  max.connections.per.ip.overrides = 
kafka_1      |  message.max.bytes = 1000012
kafka_1      |  metric.reporters = []
kafka_1      |  metrics.num.samples = 2
kafka_1      |  metrics.recording.level = INFO
kafka_1      |  metrics.sample.window.ms = 30000
kafka_1      |  min.insync.replicas = 1
kafka_1      |  num.io.threads = 8
kafka_1      |  num.network.threads = 3
kafka_1      |  num.partitions = 1
kafka_1      |  num.recovery.threads.per.data.dir = 1
kafka_1      |  num.replica.fetchers = 1
kafka_1      |  offset.metadata.max.bytes = 4096
kafka_1      |  offsets.commit.required.acks = -1
kafka_1      |  offsets.commit.timeout.ms = 5000
kafka_1      |  offsets.load.buffer.size = 5242880
kafka_1      |  offsets.retention.check.interval.ms = 600000
kafka_1      |  offsets.retention.minutes = 1440
kafka_1      |  offsets.topic.compression.codec = 0
kafka_1      |  offsets.topic.num.partitions = 50
kafka_1      |  offsets.topic.replication.factor = 1
kafka_1      |  offsets.topic.segment.bytes = 104857600
kafka_1      |  port = 9092
kafka_1      |  principal.builder.class = class org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
kafka_1      |  producer.purgatory.purge.interval.requests = 1000
kafka_1      |  queued.max.requests = 500
kafka_1      |  quota.consumer.default = 9223372036854775807
kafka_1      |  quota.producer.default = 9223372036854775807
kafka_1      |  quota.window.num = 11
kafka_1      |  quota.window.size.seconds = 1
kafka_1      |  replica.fetch.backoff.ms = 1000
kafka_1      |  replica.fetch.max.bytes = 1048576
kafka_1      |  replica.fetch.min.bytes = 1
kafka_1      |  replica.fetch.response.max.bytes = 10485760
kafka_1      |  replica.fetch.wait.max.ms = 500
kafka_1      |  replica.high.watermark.checkpoint.interval.ms = 5000
kafka_1      |  replica.lag.time.max.ms = 10000
kafka_1      |  replica.socket.receive.buffer.bytes = 65536
kafka_1      |  replica.socket.timeout.ms = 30000
kafka_1      |  replication.quota.window.num = 11
kafka_1      |  replication.quota.window.size.seconds = 1
kafka_1      |  request.timeout.ms = 30000
kafka_1      |  reserved.broker.max.id = 1000
kafka_1      |  sasl.enabled.mechanisms = [GSSAPI]
kafka_1      |  sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka_1      |  sasl.kerberos.min.time.before.relogin = 60000
kafka_1      |  sasl.kerberos.principal.to.local.rules = [DEFAULT]
kafka_1      |  sasl.kerberos.service.name = null
kafka_1      |  sasl.kerberos.ticket.renew.jitter = 0.05
kafka_1      |  sasl.kerberos.ticket.renew.window.factor = 0.8
kafka_1      |  sasl.mechanism.inter.broker.protocol = GSSAPI
kafka_1      |  security.inter.broker.protocol = PLAINTEXT
kafka_1      |  socket.receive.buffer.bytes = 102400
kafka_1      |  socket.request.max.bytes = 104857600
kafka_1      |  socket.send.buffer.bytes = 102400
kafka_1      |  ssl.cipher.suites = null
kafka_1      |  ssl.client.auth = none
kafka_1      |  ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
kafka_1      |  ssl.endpoint.identification.algorithm = null
kafka_1      |  ssl.key.password = null
kafka_1      |  ssl.keymanager.algorithm = SunX509
kafka_1      |  ssl.keystore.location = null
kafka_1      |  ssl.keystore.password = null
kafka_1      |  ssl.keystore.type = JKS
kafka_1      |  ssl.protocol = TLS
kafka_1      |  ssl.provider = null
kafka_1      |  ssl.secure.random.implementation = null
kafka_1      |  ssl.trustmanager.algorithm = PKIX
kafka_1      |  ssl.truststore.location = null
kafka_1      |  ssl.truststore.password = null
kafka_1      |  ssl.truststore.type = JKS
kafka_1      |  transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
kafka_1      |  transaction.max.timeout.ms = 900000
kafka_1      |  transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
kafka_1      |  transaction.state.log.load.buffer.size = 5242880
kafka_1      |  transaction.state.log.min.isr = 1
kafka_1      |  transaction.state.log.num.partitions = 50
kafka_1      |  transaction.state.log.replication.factor = 1
kafka_1      |  transaction.state.log.segment.bytes = 104857600
kafka_1      |  transactional.id.expiration.ms = 604800000
kafka_1      |  unclean.leader.election.enable = false
kafka_1      |  zookeeper.connect = zookeeper:2181
kafka_1      |  zookeeper.connection.timeout.ms = 6000
kafka_1      |  zookeeper.session.timeout.ms = 6000
kafka_1      |  zookeeper.set.acl = false
kafka_1      |  zookeeper.sync.time.ms = 2000
kafka_1      |  (kafka.server.KafkaConfig)
kafka_1      | [2017-08-16 19:49:56,356] INFO starting (kafka.server.KafkaServer)
kafka_1      | [2017-08-16 19:49:56,367] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer)
kafka_1      | [2017-08-16 19:49:56,389] INFO Starting ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
kafka_1      | [2017-08-16 19:49:56,402] INFO Client environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT (org.apache.zookeeper.ZooKeeper)
kafka_1      | [2017-08-16 19:49:56,402] INFO Client environment:host.name=472f9f24c4fd (org.apache.zookeeper.ZooKeeper)
kafka_1      | [2017-08-16 19:49:56,402] INFO Client environment:java.version=1.8.0_141 (org.apache.zookeeper.ZooKeeper)
kafka_1      | [2017-08-16 19:49:56,403] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper)
kafka_1      | [2017-08-16 19:49:56,404] INFO Client environment:java.home=/usr/lib/jvm/java-8-openjdk-amd64/jre (org.apache.zookeeper.ZooKeeper)
kafka_1      | [2017-08-16 19:49:56,404] INFO Client environment:java.class.path=:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/aopalliance-repackaged-2.5.0-b05.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/argparse4j-0.7.0.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/commons-lang3-3.5.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/connect-api-0.11.0.0.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/connect-file-0.11.0.0.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/connect-json-0.11.0.0.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/connect-runtime-0.11.0.0.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/connect-transforms-0.11.0.0.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/guava-20.0.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/hk2-api-2.5.0-b05.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/hk2-locator-2.5.0-b05.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/hk2-utils-2.5.0-b05.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/jackson-annotations-2.8.5.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/jackson-core-2.8.5.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/jackson-databind-2.8.5.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/jackson-jaxrs-base-2.8.5.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/jackson-jaxrs-json-provider-2.8.5.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/jackson-module-jaxb-annotations-2.8.5.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/javassist-3.21.0-GA.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/javax.annotation-api-1.2.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/javax.inject-1.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/javax.inject-2.5.0-b05.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/javax.servlet-api-3.1.0.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/javax.ws.rs-api-2.0.1.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/jersey-client-2.24.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/jersey-common-2.24.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/jersey-container-servlet-2.24.jar:/usr/local/kafka_2.11-0.11.0.0
/bin/../libs/jersey-container-servlet-core-2.24.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/jersey-guava-2.24.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/jersey-media-jaxb-2.24.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/jersey-server-2.24.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/jetty-continuation-9.2.15.v20160210.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/jetty-http-9.2.15.v20160210.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/jetty-io-9.2.15.v20160210.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/jetty-security-9.2.15.v20160210.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/jetty-server-9.2.15.v20160210.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/jetty-servlet-9.2.15.v20160210.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/jetty-servlets-9.2.15.v20160210.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/jetty-util-9.2.15.v20160210.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/jopt-simple-5.0.3.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/kafka-clients-0.11.0.0.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/kafka-log4j-appender-0.11.0.0.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/kafka-streams-0.11.0.0.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/kafka-streams-examples-0.11.0.0.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/kafka-tools-0.11.0.0.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/kafka_2.11-0.11.0.0-sources.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/kafka_2.11-0.11.0.0-test-sources.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/kafka_2.11-0.11.0.0.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/log4j-1.2.17.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/lz4-1.3.0.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/maven-artifact-3.5.0.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/metrics-core-2.2.0.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/osgi-resource-locator-1.0.1.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/plexus-utils-3.0.24.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/reflections-0.9.11.jar:/usr/local/kafka_2
.11-0.11.0.0/bin/../libs/rocksdbjni-5.0.1.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/scala-library-2.11.11.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/scala-parser-combinators_2.11-1.0.4.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/slf4j-api-1.7.25.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/slf4j-log4j12-1.7.25.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/snappy-java-1.1.2.6.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/validation-api-1.1.0.Final.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/zkclient-0.10.jar:/usr/local/kafka_2.11-0.11.0.0/bin/../libs/zookeeper-3.4.10.jar (org.apache.zookeeper.ZooKeeper)
kafka_1      | [2017-08-16 19:49:56,405] INFO Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
kafka_1      | [2017-08-16 19:49:56,406] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
kafka_1      | [2017-08-16 19:49:56,406] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
kafka_1      | [2017-08-16 19:49:56,406] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
kafka_1      | [2017-08-16 19:49:56,407] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
kafka_1      | [2017-08-16 19:49:56,407] INFO Client environment:os.version=4.9.36-moby (org.apache.zookeeper.ZooKeeper)
kafka_1      | [2017-08-16 19:49:56,407] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper)
kafka_1      | [2017-08-16 19:49:56,407] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper)
kafka_1      | [2017-08-16 19:49:56,407] INFO Client environment:user.dir=/usr/local/kafka_2.11-0.11.0.0 (org.apache.zookeeper.ZooKeeper)
kafka_1      | [2017-08-16 19:49:56,410] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@4567f35d (org.apache.zookeeper.ZooKeeper)
kafka_1      | [2017-08-16 19:49:56,446] INFO Waiting for keeper state SyncConnected (org.I0Itec.zkclient.ZkClient)
kafka_1      | [2017-08-16 19:49:56,449] INFO Opening socket connection to server lightdocker_zookeeper_1.localnet/172.18.0.2:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
kafka_1      | [2017-08-16 19:49:56,467] INFO Socket connection established to lightdocker_zookeeper_1.localnet/172.18.0.2:2181, initiating session (org.apache.zookeeper.ClientCnxn)
zookeeper_1  | 2017-08-16 19:49:56,471 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /172.18.0.4:32940
zookeeper_1  | 2017-08-16 19:49:56,495 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@942] - Client attempting to establish new session at /172.18.0.4:32940
zookeeper_1  | 2017-08-16 19:49:56,501 [myid:] - INFO  [SyncThread:0:FileTxnLog@203] - Creating new log file: log.1
zookeeper_1  | 2017-08-16 19:49:56,532 [myid:] - INFO  [SyncThread:0:ZooKeeperServer@687] - Established session 0x15dec9850990000 with negotiated timeout 6000 for client /172.18.0.4:32940
kafka_1      | [2017-08-16 19:49:56,540] INFO Session establishment complete on server lightdocker_zookeeper_1.localnet/172.18.0.2:2181, sessionid = 0x15dec9850990000, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
kafka_1      | [2017-08-16 19:49:56,543] INFO zookeeper state changed (SyncConnected) (org.I0Itec.zkclient.ZkClient)
zookeeper_1  | 2017-08-16 19:49:56,606 [myid:] - INFO  [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@648] - Got user-level KeeperException when processing sessionid:0x15dec9850990000 type:create cxid:0x5 zxid:0x3 txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NoNode for /brokers
zookeeper_1  | 2017-08-16 19:49:56,637 [myid:] - INFO  [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@648] - Got user-level KeeperException when processing sessionid:0x15dec9850990000 type:create cxid:0xb zxid:0x7 txntype:-1 reqpath:n/a Error Path:/config Error:KeeperErrorCode = NoNode for /config
zookeeper_1  | 2017-08-16 19:49:56,660 [myid:] - INFO  [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@648] - Got user-level KeeperException when processing sessionid:0x15dec9850990000 type:create cxid:0x13 zxid:0xc txntype:-1 reqpath:n/a Error Path:/admin Error:KeeperErrorCode = NoNode for /admin
zookeeper_1  | 2017-08-16 19:49:56,739 [myid:] - INFO  [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@648] - Got user-level KeeperException when processing sessionid:0x15dec9850990000 type:create cxid:0x1d zxid:0x12 txntype:-1 reqpath:n/a Error Path:/cluster Error:KeeperErrorCode = NoNode for /cluster
kafka_1      | [2017-08-16 19:49:56,749] INFO Cluster ID = _-7S8XwNRyq3gIurHCm_hQ (kafka.server.KafkaServer)
kafka_1      | [2017-08-16 19:49:56,759] WARN No meta.properties file under dir /tmp/kafka-logs/meta.properties (kafka.server.BrokerMetadataCheckpoint)
kafka_1      | [2017-08-16 19:49:56,823] INFO [ThrottledRequestReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
kafka_1      | [2017-08-16 19:49:56,830] INFO [ThrottledRequestReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
kafka_1      | [2017-08-16 19:49:56,839] INFO [ThrottledRequestReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
kafka_1      | [2017-08-16 19:49:56,893] INFO Log directory '/tmp/kafka-logs' not found, creating it. (kafka.log.LogManager)
kafka_1      | [2017-08-16 19:49:56,915] INFO Loading logs. (kafka.log.LogManager)
kafka_1      | [2017-08-16 19:49:56,932] INFO Logs loading complete in 17 ms. (kafka.log.LogManager)
mysql_1      | Database initialized
mysql_1      | Initializing certificates
mysql_1      | Generating a 2048 bit RSA private key
kafka_1      | [2017-08-16 19:49:57,035] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
kafka_1      | [2017-08-16 19:49:57,044] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
mysql_1      | ..............+++
mysql_1      | .+++
mysql_1      | unable to write 'random state'
mysql_1      | writing new private key to 'ca-key.pem'
mysql_1      | -----
mysql_1      | Generating a 2048 bit RSA private key
mysql_1      | ...........+++
kafka_1      | [2017-08-16 19:49:57,177] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
mysql_1      | ....................................+++
mysql_1      | unable to write 'random state'
mysql_1      | writing new private key to 'server-key.pem'
mysql_1      | -----
kafka_1      | [2017-08-16 19:49:57,187] INFO [Socket Server on Broker 0], Started 1 acceptor threads (kafka.network.SocketServer)
kafka_1      | [2017-08-16 19:49:57,218] INFO [ExpirationReaper-0-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
mysql_1      | Generating a 2048 bit RSA private key
kafka_1      | [2017-08-16 19:49:57,226] INFO [ExpirationReaper-0-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1      | [2017-08-16 19:49:57,234] INFO [ExpirationReaper-0-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
mysql_1      | ..............................+++
mysql_1      | .........+++
mysql_1      | unable to write 'random state'
mysql_1      | writing new private key to 'client-key.pem'
mysql_1      | -----
kafka_1      | [2017-08-16 19:49:57,390] INFO [ExpirationReaper-0-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1      | [2017-08-16 19:49:57,406] INFO Creating /controller (is it secure? false) (kafka.utils.ZKCheckedEphemeral)
kafka_1      | [2017-08-16 19:49:57,427] INFO [ExpirationReaper-0-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1      | [2017-08-16 19:49:57,416] INFO [ExpirationReaper-0-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1      | [2017-08-16 19:49:57,451] INFO Result of znode creation is: OK (kafka.utils.ZKCheckedEphemeral)
mysql_1      | Certificates initialized
zookeeper_1  | 2017-08-16 19:49:57,519 [myid:] - INFO  [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@648] - Got user-level KeeperException when processing sessionid:0x15dec9850990000 type:setData cxid:0x28 zxid:0x16 txntype:-1 reqpath:n/a Error Path:/controller_epoch Error:KeeperErrorCode = NoNode for /controller_epoch
mysql_1      | MySQL init process in progress...
kafka_1      | [2017-08-16 19:49:57,556] INFO [GroupCoordinator 0]: Starting up. (kafka.coordinator.group.GroupCoordinator)
kafka_1      | [2017-08-16 19:49:57,603] INFO [Group Metadata Manager on Broker 0]: Removed 0 expired offsets in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka_1      | [2017-08-16 19:49:57,610] INFO [GroupCoordinator 0]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
kafka_1      | [2017-08-16 19:49:57,654] INFO [ProducerId Manager 0]: Acquired new producerId block (brokerId:0,blockStartProducerId:0,blockEndProducerId:999) by writing to Zk with path version 1 (kafka.coordinator.transaction.ProducerIdManager)
mysql_1      | 2017-08-16T19:49:57.747826Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
mysql_1      | 2017-08-16T19:49:57.749833Z 0 [Note] mysqld (mysqld 5.7.19-log) starting as process 88 ...
mysql_1      | 2017-08-16T19:49:57.771724Z 0 [Note] InnoDB: PUNCH HOLE support available
kafka_1      | [2017-08-16 19:49:57,776] INFO [Transaction Coordinator 0]: Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
mysql_1      | 2017-08-16T19:49:57.777676Z 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
mysql_1      | 2017-08-16T19:49:57.778113Z 0 [Note] InnoDB: Uses event mutexes
mysql_1      | 2017-08-16T19:49:57.778515Z 0 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier
mysql_1      | 2017-08-16T19:49:57.778948Z 0 [Note] InnoDB: Compressed tables use zlib 1.2.3
mysql_1      | 2017-08-16T19:49:57.779362Z 0 [Note] InnoDB: Using Linux native AIO
mysql_1      | 2017-08-16T19:49:57.780146Z 0 [Note] InnoDB: Number of pools: 1
mysql_1      | 2017-08-16T19:49:57.781173Z 0 [Note] InnoDB: Using CPU crc32 instructions
mysql_1      | 2017-08-16T19:49:57.783684Z 0 [Note] InnoDB: Initializing buffer pool, total size = 128M, instances = 1, chunk size = 128M
mysql_1      | 2017-08-16T19:49:57.795606Z 0 [Note] InnoDB: Completed initialization of buffer pool
mysql_1      | 2017-08-16T19:49:57.802525Z 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().
mysql_1      | 2017-08-16T19:49:57.822520Z 0 [Note] InnoDB: Highest supported file format is Barracuda.
kafka_1      | [2017-08-16 19:49:57,825] INFO [Transaction Marker Channel Manager 0]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
kafka_1      | [2017-08-16 19:49:57,827] INFO [Transaction Coordinator 0]: Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
mysql_1      | 2017-08-16T19:49:57.856778Z 0 [Note] InnoDB: Creating shared tablespace for temporary tables
mysql_1      | 2017-08-16T19:49:57.870591Z 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
zookeeper_1  | 2017-08-16 19:49:57,980 [myid:] - INFO  [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@648] - Got user-level KeeperException when processing sessionid:0x15dec9850990000 type:delete cxid:0x3c zxid:0x19 txntype:-1 reqpath:n/a Error Path:/admin/preferred_replica_election Error:KeeperErrorCode = NoNode for /admin/preferred_replica_election
kafka_1      | [2017-08-16 19:49:57,990] INFO Will not load MX4J, mx4j-tools.jar is not in the classpath (kafka.utils.Mx4jLoader$)
mysql_1      | 2017-08-16T19:49:58.128213Z 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB.
mysql_1      | 2017-08-16T19:49:58.131581Z 0 [Note] InnoDB: 96 redo rollback segment(s) found. 96 redo rollback segment(s) are active.
mysql_1      | 2017-08-16T19:49:58.134051Z 0 [Note] InnoDB: 32 non-redo rollback segment(s) are active.
mysql_1      | 2017-08-16T19:49:58.135581Z 0 [Note] InnoDB: Waiting for purge to start
kafka_1      | [2017-08-16 19:49:58,170] INFO Creating /brokers/ids/0 (is it secure? false) (kafka.utils.ZKCheckedEphemeral)
zookeeper_1  | 2017-08-16 19:49:58,174 [myid:] - INFO  [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@648] - Got user-level KeeperException when processing sessionid:0x15dec9850990000 type:create cxid:0x46 zxid:0x1a txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NodeExists for /brokers
zookeeper_1  | 2017-08-16 19:49:58,179 [myid:] - INFO  [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@648] - Got user-level KeeperException when processing sessionid:0x15dec9850990000 type:create cxid:0x47 zxid:0x1b txntype:-1 reqpath:n/a Error Path:/brokers/ids Error:KeeperErrorCode = NodeExists for /brokers/ids
mysql_1      | 2017-08-16T19:49:58.186610Z 0 [Note] InnoDB: 5.7.19 started; log sequence number 2539315
mysql_1      | 2017-08-16T19:49:58.187582Z 0 [Note] Plugin 'FEDERATED' is disabled.
mysql_1      | 2017-08-16T19:49:58.188215Z 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
kafka_1      | [2017-08-16 19:49:58,189] INFO Result of znode creation is: OK (kafka.utils.ZKCheckedEphemeral)
mysql_1      | 2017-08-16T19:49:58.205572Z 0 [Note] InnoDB: Buffer pool(s) load completed at 170816 19:49:58
kafka_1      | [2017-08-16 19:49:58,218] INFO Registered broker 0 at path /brokers/ids/0 with addresses: EndPoint(10.200.10.1,9092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.utils.ZkUtils)
kafka_1      | [2017-08-16 19:49:58,222] WARN No meta.properties file under dir /tmp/kafka-logs/meta.properties (kafka.server.BrokerMetadataCheckpoint)
mysql_1      | 2017-08-16T19:49:58.226447Z 0 [Note] Found ca.pem, server-cert.pem and server-key.pem in data directory. Trying to enable SSL support using them.
mysql_1      | 2017-08-16T19:49:58.227351Z 0 [Warning] CA certificate ca.pem is self signed.
mysql_1      | 2017-08-16T19:49:58.238878Z 0 [Warning] 'user' entry 'root@localhost' ignored in --skip-name-resolve mode.
mysql_1      | 2017-08-16T19:49:58.238935Z 0 [Warning] 'user' entry 'mysql.session@localhost' ignored in --skip-name-resolve mode.
mysql_1      | 2017-08-16T19:49:58.238952Z 0 [Warning] 'user' entry 'mysql.sys@localhost' ignored in --skip-name-resolve mode.
mysql_1      | 2017-08-16T19:49:58.239106Z 0 [Warning] 'db' entry 'performance_schema mysql.session@localhost' ignored in --skip-name-resolve mode.
mysql_1      | 2017-08-16T19:49:58.239137Z 0 [Warning] 'db' entry 'sys mysql.sys@localhost' ignored in --skip-name-resolve mode.
mysql_1      | 2017-08-16T19:49:58.239588Z 0 [Warning] 'proxies_priv' entry '@ root@localhost' ignored in --skip-name-resolve mode.
mysql_1      | 2017-08-16T19:49:58.244527Z 0 [Warning] 'tables_priv' entry 'user mysql.session@localhost' ignored in --skip-name-resolve mode.
mysql_1      | 2017-08-16T19:49:58.244589Z 0 [Warning] 'tables_priv' entry 'sys_config mysql.sys@localhost' ignored in --skip-name-resolve mode.
kafka_1      | [2017-08-16 19:49:58,269] INFO Kafka version : 0.11.0.0 (org.apache.kafka.common.utils.AppInfoParser)
kafka_1      | [2017-08-16 19:49:58,274] INFO Kafka commitId : cb8625948210849f (org.apache.kafka.common.utils.AppInfoParser)
kafka_1      | [2017-08-16 19:49:58,276] INFO [Kafka Server 0], started (kafka.server.KafkaServer)
mysql_1      | 2017-08-16T19:49:58.284732Z 0 [Note] Event Scheduler: Loaded 0 events
mysql_1      | 2017-08-16T19:49:58.286254Z 0 [Note] mysqld: ready for connections.
mysql_1      | Version: '5.7.19-log'  socket: '/var/run/mysqld/mysqld.sock'  port: 0  MySQL Community Server (GPL)
mysql_1      | 2017-08-16T19:49:58.286397Z 0 [Note] Executing 'SELECT * FROM INFORMATION_SCHEMA.TABLES;' to get a list of tables using the deprecated partition engine. You may use the startup option '--disable-partition-engine-check' to skip this check. 
mysql_1      | 2017-08-16T19:49:58.286423Z 0 [Note] Beginning of list of non-natively partitioned tables
mysql_1      | 2017-08-16T19:49:58.323246Z 0 [Note] End of list of non-natively partitioned tables
mysql_1      | Warning: Unable to load '/usr/share/zoneinfo/iso3166.tab' as time zone. Skipping it.
mysql_1      | Warning: Unable to load '/usr/share/zoneinfo/leap-seconds.list' as time zone. Skipping it.
mysql_1      | Warning: Unable to load '/usr/share/zoneinfo/zone.tab' as time zone. Skipping it.
mysql_1      | 2017-08-16T19:50:02.882831Z 5 [Warning] 'user' entry 'root@localhost' ignored in --skip-name-resolve mode.
mysql_1      | 2017-08-16T19:50:02.882895Z 5 [Warning] 'user' entry 'mysql.sys@localhost' ignored in --skip-name-resolve mode.
mysql_1      | 2017-08-16T19:50:02.882919Z 5 [Warning] 'db' entry 'performance_schema mysql.session@localhost' ignored in --skip-name-resolve mode.
mysql_1      | 2017-08-16T19:50:02.882929Z 5 [Warning] 'db' entry 'sys mysql.sys@localhost' ignored in --skip-name-resolve mode.
mysql_1      | 2017-08-16T19:50:02.882945Z 5 [Warning] 'proxies_priv' entry '@ root@localhost' ignored in --skip-name-resolve mode.
mysql_1      | 2017-08-16T19:50:02.883098Z 5 [Warning] 'tables_priv' entry 'user mysql.session@localhost' ignored in --skip-name-resolve mode.
mysql_1      | 2017-08-16T19:50:02.883274Z 5 [Warning] 'tables_priv' entry 'sys_config mysql.sys@localhost' ignored in --skip-name-resolve mode.
mysql_1      | mysql: [Warning] Using a password on the command line interface can be insecure.
mysql_1      | mysql: [Warning] Using a password on the command line interface can be insecure.
mysql_1      | 2017-08-16T19:50:02.900046Z 7 [Warning] 'user' entry 'root@localhost' ignored in --skip-name-resolve mode.
mysql_1      | 2017-08-16T19:50:02.900108Z 7 [Warning] 'user' entry 'mysql.sys@localhost' ignored in --skip-name-resolve mode.
mysql_1      | 2017-08-16T19:50:02.900138Z 7 [Warning] 'db' entry 'performance_schema mysql.session@localhost' ignored in --skip-name-resolve mode.
mysql_1      | 2017-08-16T19:50:02.900148Z 7 [Warning] 'db' entry 'sys mysql.sys@localhost' ignored in --skip-name-resolve mode.
mysql_1      | 2017-08-16T19:50:02.900164Z 7 [Warning] 'proxies_priv' entry '@ root@localhost' ignored in --skip-name-resolve mode.
mysql_1      | 2017-08-16T19:50:02.900617Z 7 [Warning] 'tables_priv' entry 'user mysql.session@localhost' ignored in --skip-name-resolve mode.
mysql_1      | 2017-08-16T19:50:02.900677Z 7 [Warning] 'tables_priv' entry 'sys_config mysql.sys@localhost' ignored in --skip-name-resolve mode.
mysql_1      | mysql: [Warning] Using a password on the command line interface can be insecure.
mysql_1      | 
mysql_1      | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/initialize-database.sql
mysql_1      | 
mysql_1      | 
mysql_1      | 2017-08-16T19:50:03.037609Z 0 [Note] Giving 1 client threads a chance to die gracefully
mysql_1      | 2017-08-16T19:50:03.037650Z 0 [Note] Shutting down slave threads
mysql_1      | 2017-08-16T19:50:05.040029Z 0 [Note] Forcefully disconnecting 0 remaining clients
mysql_1      | 2017-08-16T19:50:05.040448Z 0 [Note] Event Scheduler: Purging the queue. 0 events
mysql_1      | 2017-08-16T19:50:05.041237Z 0 [Note] Binlog end
mysql_1      | 2017-08-16T19:50:05.044877Z 0 [Note] Shutting down plugin 'ngram'
mysql_1      | 2017-08-16T19:50:05.044924Z 0 [Note] Shutting down plugin 'ARCHIVE'
mysql_1      | 2017-08-16T19:50:05.044934Z 0 [Note] Shutting down plugin 'partition'
mysql_1      | 2017-08-16T19:50:05.044943Z 0 [Note] Shutting down plugin 'BLACKHOLE'
mysql_1      | 2017-08-16T19:50:05.044952Z 0 [Note] Shutting down plugin 'MyISAM'
mysql_1      | 2017-08-16T19:50:05.044968Z 0 [Note] Shutting down plugin 'MRG_MYISAM'
mysql_1      | 2017-08-16T19:50:05.044978Z 0 [Note] Shutting down plugin 'INNODB_SYS_VIRTUAL'
mysql_1      | 2017-08-16T19:50:05.044986Z 0 [Note] Shutting down plugin 'INNODB_SYS_DATAFILES'
mysql_1      | 2017-08-16T19:50:05.044994Z 0 [Note] Shutting down plugin 'INNODB_SYS_TABLESPACES'
mysql_1      | 2017-08-16T19:50:05.045002Z 0 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN_COLS'
mysql_1      | 2017-08-16T19:50:05.045029Z 0 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN'
mysql_1      | 2017-08-16T19:50:05.045057Z 0 [Note] Shutting down plugin 'INNODB_SYS_FIELDS'
mysql_1      | 2017-08-16T19:50:05.045067Z 0 [Note] Shutting down plugin 'INNODB_SYS_COLUMNS'
mysql_1      | 2017-08-16T19:50:05.045075Z 0 [Note] Shutting down plugin 'INNODB_SYS_INDEXES'
mysql_1      | 2017-08-16T19:50:05.045082Z 0 [Note] Shutting down plugin 'INNODB_SYS_TABLESTATS'
mysql_1      | 2017-08-16T19:50:05.045090Z 0 [Note] Shutting down plugin 'INNODB_SYS_TABLES'
mysql_1      | 2017-08-16T19:50:05.045098Z 0 [Note] Shutting down plugin 'INNODB_FT_INDEX_TABLE'
mysql_1      | 2017-08-16T19:50:05.045106Z 0 [Note] Shutting down plugin 'INNODB_FT_INDEX_CACHE'
mysql_1      | 2017-08-16T19:50:05.045113Z 0 [Note] Shutting down plugin 'INNODB_FT_CONFIG'
mysql_1      | 2017-08-16T19:50:05.045121Z 0 [Note] Shutting down plugin 'INNODB_FT_BEING_DELETED'
mysql_1      | 2017-08-16T19:50:05.045128Z 0 [Note] Shutting down plugin 'INNODB_FT_DELETED'
mysql_1      | 2017-08-16T19:50:05.045149Z 0 [Note] Shutting down plugin 'INNODB_FT_DEFAULT_STOPWORD'
mysql_1      | 2017-08-16T19:50:05.045156Z 0 [Note] Shutting down plugin 'INNODB_METRICS'
mysql_1      | 2017-08-16T19:50:05.045165Z 0 [Note] Shutting down plugin 'INNODB_TEMP_TABLE_INFO'
mysql_1      | 2017-08-16T19:50:05.045178Z 0 [Note] Shutting down plugin 'INNODB_BUFFER_POOL_STATS'
mysql_1      | 2017-08-16T19:50:05.045209Z 0 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE_LRU'
mysql_1      | 2017-08-16T19:50:05.045229Z 0 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE'
mysql_1      | 2017-08-16T19:50:05.045242Z 0 [Note] Shutting down plugin 'INNODB_CMP_PER_INDEX_RESET'
mysql_1      | 2017-08-16T19:50:05.045249Z 0 [Note] Shutting down plugin 'INNODB_CMP_PER_INDEX'
mysql_1      | 2017-08-16T19:50:05.045257Z 0 [Note] Shutting down plugin 'INNODB_CMPMEM_RESET'
mysql_1      | 2017-08-16T19:50:05.045271Z 0 [Note] Shutting down plugin 'INNODB_CMPMEM'
mysql_1      | 2017-08-16T19:50:05.045302Z 0 [Note] Shutting down plugin 'INNODB_CMP_RESET'
mysql_1      | 2017-08-16T19:50:05.045310Z 0 [Note] Shutting down plugin 'INNODB_CMP'
mysql_1      | 2017-08-16T19:50:05.045320Z 0 [Note] Shutting down plugin 'INNODB_LOCK_WAITS'
mysql_1      | 2017-08-16T19:50:05.045351Z 0 [Note] Shutting down plugin 'INNODB_LOCKS'
mysql_1      | 2017-08-16T19:50:05.045363Z 0 [Note] Shutting down plugin 'INNODB_TRX'
mysql_1      | 2017-08-16T19:50:05.045376Z 0 [Note] Shutting down plugin 'InnoDB'
mysql_1      | 2017-08-16T19:50:05.045482Z 0 [Note] InnoDB: FTS optimize thread exiting.
mysql_1      | 2017-08-16T19:50:05.045737Z 0 [Note] InnoDB: Starting shutdown...
mysql_1      | 2017-08-16T19:50:05.148232Z 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool
mysql_1      | 2017-08-16T19:50:05.148863Z 0 [Note] InnoDB: Buffer pool(s) dump completed at 170816 19:50:05
mysql_1      | 2017-08-16T19:50:06.796124Z 0 [Note] InnoDB: Shutdown completed; log sequence number 12168140
mysql_1      | 2017-08-16T19:50:06.801532Z 0 [Note] InnoDB: Removed temporary tablespace data file: "ibtmp1"
mysql_1      | 2017-08-16T19:50:06.801593Z 0 [Note] Shutting down plugin 'CSV'
mysql_1      | 2017-08-16T19:50:06.801615Z 0 [Note] Shutting down plugin 'MEMORY'
mysql_1      | 2017-08-16T19:50:06.801628Z 0 [Note] Shutting down plugin 'PERFORMANCE_SCHEMA'
mysql_1      | 2017-08-16T19:50:06.801660Z 0 [Note] Shutting down plugin 'sha256_password'
mysql_1      | 2017-08-16T19:50:06.801689Z 0 [Note] Shutting down plugin 'mysql_native_password'
mysql_1      | 2017-08-16T19:50:06.801881Z 0 [Note] Shutting down plugin 'binlog'
mysql_1      | 2017-08-16T19:50:06.805114Z 0 [Note] mysqld: Shutdown complete
mysql_1      | 
mysql_1      | 
mysql_1      | MySQL init process done. Ready for start up.
mysql_1      | 
mysql_1      | 2017-08-16T19:50:07.089995Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
mysql_1      | 2017-08-16T19:50:07.091185Z 0 [Note] mysqld (mysqld 5.7.19-log) starting as process 1 ...
mysql_1      | 2017-08-16T19:50:07.096592Z 0 [Note] InnoDB: PUNCH HOLE support available
mysql_1      | 2017-08-16T19:50:07.096702Z 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
mysql_1      | 2017-08-16T19:50:07.096755Z 0 [Note] InnoDB: Uses event mutexes
mysql_1      | 2017-08-16T19:50:07.096789Z 0 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier
mysql_1      | 2017-08-16T19:50:07.096963Z 0 [Note] InnoDB: Compressed tables use zlib 1.2.3
mysql_1      | 2017-08-16T19:50:07.096993Z 0 [Note] InnoDB: Using Linux native AIO
mysql_1      | 2017-08-16T19:50:07.097937Z 0 [Note] InnoDB: Number of pools: 1
mysql_1      | 2017-08-16T19:50:07.098450Z 0 [Note] InnoDB: Using CPU crc32 instructions
mysql_1      | 2017-08-16T19:50:07.101277Z 0 [Note] InnoDB: Initializing buffer pool, total size = 128M, instances = 1, chunk size = 128M
mysql_1      | 2017-08-16T19:50:07.109177Z 0 [Note] InnoDB: Completed initialization of buffer pool
mysql_1      | 2017-08-16T19:50:07.111710Z 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().
mysql_1      | 2017-08-16T19:50:07.124690Z 0 [Note] InnoDB: Highest supported file format is Barracuda.
mysql_1      | 2017-08-16T19:50:07.137837Z 0 [Note] InnoDB: Creating shared tablespace for temporary tables
mysql_1      | 2017-08-16T19:50:07.137929Z 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
mysql_1      | 2017-08-16T19:50:07.395464Z 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB.
mysql_1      | 2017-08-16T19:50:07.398465Z 0 [Note] InnoDB: 96 redo rollback segment(s) found. 96 redo rollback segment(s) are active.
mysql_1      | 2017-08-16T19:50:07.399278Z 0 [Note] InnoDB: 32 non-redo rollback segment(s) are active.
mysql_1      | 2017-08-16T19:50:07.401250Z 0 [Note] InnoDB: 5.7.19 started; log sequence number 12168140
mysql_1      | 2017-08-16T19:50:07.402173Z 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
mysql_1      | 2017-08-16T19:50:07.403713Z 0 [Note] Plugin 'FEDERATED' is disabled.
mysql_1      | 2017-08-16T19:50:07.407731Z 0 [Note] InnoDB: Buffer pool(s) load completed at 170816 19:50:07
mysql_1      | 2017-08-16T19:50:07.426039Z 0 [Note] Found ca.pem, server-cert.pem and server-key.pem in data directory. Trying to enable SSL support using them.
mysql_1      | 2017-08-16T19:50:07.426954Z 0 [Warning] CA certificate ca.pem is self signed.
mysql_1      | 2017-08-16T19:50:07.430328Z 0 [Note] Server hostname (bind-address): '*'; port: 3306
mysql_1      | 2017-08-16T19:50:07.430730Z 0 [Note] IPv6 is available.
mysql_1      | 2017-08-16T19:50:07.430882Z 0 [Note]   - '::' resolves to '::';
mysql_1      | 2017-08-16T19:50:07.431446Z 0 [Note] Server socket created on IP: '::'.
mysql_1      | 2017-08-16T19:50:07.435668Z 0 [Warning] 'user' entry 'root@localhost' ignored in --skip-name-resolve mode.
mysql_1      | 2017-08-16T19:50:07.435833Z 0 [Warning] 'user' entry 'mysql.sys@localhost' ignored in --skip-name-resolve mode.
mysql_1      | 2017-08-16T19:50:07.436149Z 0 [Warning] 'db' entry 'performance_schema mysql.session@localhost' ignored in --skip-name-resolve mode.
mysql_1      | 2017-08-16T19:50:07.436261Z 0 [Warning] 'db' entry 'sys mysql.sys@localhost' ignored in --skip-name-resolve mode.
mysql_1      | 2017-08-16T19:50:07.436470Z 0 [Warning] 'proxies_priv' entry '@ root@localhost' ignored in --skip-name-resolve mode.
mysql_1      | 2017-08-16T19:50:07.439200Z 0 [Warning] 'tables_priv' entry 'user mysql.session@localhost' ignored in --skip-name-resolve mode.
mysql_1      | 2017-08-16T19:50:07.439355Z 0 [Warning] 'tables_priv' entry 'sys_config mysql.sys@localhost' ignored in --skip-name-resolve mode.
mysql_1      | 2017-08-16T19:50:07.448044Z 0 [Note] Event Scheduler: Loaded 0 events
mysql_1      | 2017-08-16T19:50:07.448661Z 0 [Note] mysqld: ready for connections.
mysql_1      | Version: '5.7.19-log'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  MySQL Community Server (GPL)
mysql_1      | 2017-08-16T19:50:07.448756Z 0 [Note] Executing 'SELECT * FROM INFORMATION_SCHEMA.TABLES;' to get a list of tables using the deprecated partition engine. You may use the startup option '--disable-partition-engine-check' to skip this check. 
mysql_1      | 2017-08-16T19:50:07.448986Z 0 [Note] Beginning of list of non-natively partitioned tables
mysql_1      | 2017-08-16T19:50:07.461712Z 0 [Note] End of list of non-natively partitioned tables
stevehu commented 7 years ago

It is working, as I can see this line:

"ADVERTISED_HOST_NAME=10.200.10.1"
notesby commented 7 years ago

I don't have much experience with Docker or microservices, so I may be doing something wrong. How can I get more information about what is happening?

stevehu commented 7 years ago

Given the above inspect result, the eventuate compose is working. The next step is to start the cdcserver compose by following the tutorial. Let me know if you encounter other issues.

Docker just gives you an easier way to start multiple services together; it is not crucial to use Docker at all. You can start everything on your host system, which might be easier for people who are still learning Docker. In the long run, though, especially when deploying to production, Docker may well be necessary.
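
On the earlier question of getting more information: a few standard Docker commands show what each container is doing. The container name below is taken from the log output in this thread; substitute your own compose file and container names:

```shell
# List running containers with their port mappings and status
docker ps

# Tail the logs of one container from this compose project
docker logs -f lightdocker_kafka_1

# Inspect a container's network settings (IP address, exposed ports)
docker inspect lightdocker_kafka_1
```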

notesby commented 7 years ago

Thank you. This is what I get when I start the cdcserver with Docker:

$ docker-compose -f docker-compose-cdcserver.yml up
WARNING: Found orphan containers (lightdocker_kafka_1, lightdocker_mysql_1, lightdocker_zookeeper_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
Creating lightdocker_cdcserver_1 ... 
Creating lightdocker_cdcserver_1 ... done
Attaching to lightdocker_cdcserver_1
cdcserver_1  | 21:32:20,622 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [/config/logback.xml]
cdcserver_1  | 21:32:20,622 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback-test.xml]
cdcserver_1  | 21:32:20,622 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback.groovy]
cdcserver_1  | 21:32:20,622 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Found resource [logback.xml] at [jar:file:/server.jar!/logback.xml]
cdcserver_1  | 21:32:20,652 |-INFO in ch.qos.logback.core.joran.spi.ConfigurationWatchList@75412c2f - URL [jar:file:/server.jar!/logback.xml] is not of type file
cdcserver_1  | 21:32:20,769 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - debug attribute not set
cdcserver_1  | 21:32:20,808 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.core.ConsoleAppender]
cdcserver_1  | 21:32:20,814 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [stdout]
cdcserver_1  | 21:32:20,815 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [ch.qos.logback.classic.encoder.PatternLayoutEncoder] for [encoder] property
cdcserver_1  | 21:32:20,909 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.core.FileAppender]
cdcserver_1  | 21:32:20,915 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [log]
cdcserver_1  | 21:32:20,920 |-WARN in ch.qos.logback.core.FileAppender[log] - This appender no longer admits a layout as a sub-component, set an encoder instead.
cdcserver_1  | 21:32:20,920 |-WARN in ch.qos.logback.core.FileAppender[log] - To ensure compatibility, wrapping your layout in LayoutWrappingEncoder.
cdcserver_1  | 21:32:20,920 |-WARN in ch.qos.logback.core.FileAppender[log] - See also http://logback.qos.ch/codes.html#layoutInsteadOfEncoder for details
cdcserver_1  | 21:32:20,920 |-INFO in ch.qos.logback.core.FileAppender[log] - File property is set to [target/test.log]
cdcserver_1  | 21:32:20,923 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.core.rolling.RollingFileAppender]
cdcserver_1  | 21:32:20,924 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [audit]
cdcserver_1  | 21:32:20,926 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [ch.qos.logback.classic.encoder.PatternLayoutEncoder] for [encoder] property
cdcserver_1  | 21:32:20,926 |-WARN in ch.qos.logback.classic.encoder.PatternLayoutEncoder@282ba1e - As of version 1.2.0 "immediateFlush" property should be set within the enclosing Appender.
cdcserver_1  | 21:32:20,926 |-WARN in ch.qos.logback.classic.encoder.PatternLayoutEncoder@282ba1e - Please move "immediateFlush" property into the enclosing appender.
cdcserver_1  | 21:32:20,927 |-WARN in ch.qos.logback.classic.encoder.PatternLayoutEncoder@282ba1e - Setting the "immediateFlush" property of the enclosing appender to true
cdcserver_1  | 21:32:20,940 |-INFO in ch.qos.logback.core.rolling.FixedWindowRollingPolicy@13b6d03 - Will use zip compression
cdcserver_1  | 21:32:20,946 |-INFO in ch.qos.logback.core.rolling.RollingFileAppender[audit] - Active log file name: target/audit.log
cdcserver_1  | 21:32:20,946 |-INFO in ch.qos.logback.core.rolling.RollingFileAppender[audit] - File property is set to [target/audit.log]
cdcserver_1  | 21:32:20,946 |-INFO in ch.qos.logback.classic.joran.action.RootLoggerAction - Setting level of ROOT logger to DEBUG
cdcserver_1  | 21:32:20,947 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [stdout] to Logger[ROOT]
cdcserver_1  | 21:32:20,948 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [com.networknt] to DEBUG
cdcserver_1  | 21:32:20,948 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [log] to Logger[com.networknt]
cdcserver_1  | 21:32:20,948 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [Audit] to ERROR
cdcserver_1  | 21:32:20,948 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting additivity of logger [Audit] to false
cdcserver_1  | 21:32:20,948 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [audit] to Logger[Audit]
cdcserver_1  | 21:32:20,948 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - End of configuration.
cdcserver_1  | 21:32:20,950 |-INFO in ch.qos.logback.classic.joran.JoranConfigurator@f5f2bb7 - Registering current configuration as safe fallback point
cdcserver_1  | 
cdcserver_1  | 21:32:21.353 [main]       INFO  com.networknt.config.Config - daily config cache refresh
cdcserver_1  | 21:32:21.376 [main]       INFO  com.networknt.config.Config - Unable to load config from externalized folder for server.yml in /config
cdcserver_1  | 21:32:21.376 [main]       INFO  com.networknt.config.Config - Trying to load config from classpath directory for file server.yml
cdcserver_1  | 21:32:21.379 [main]       INFO  com.networknt.config.Config - Config loaded from default folder for server.yml
cdcserver_1  | 21:32:21.462 [main]       INFO  com.networknt.config.Config - Unable to load config from externalized folder for secret.yml in /config
cdcserver_1  | 21:32:21.462 [main]       INFO  com.networknt.config.Config - Trying to load config from classpath directory for file secret.yml
cdcserver_1  | 21:32:21.463 [main]       INFO  com.networknt.config.Config - Config loaded from default folder for secret.yml
cdcserver_1  | 21:32:21.469 [main]       INFO  com.networknt.server.Server - server starts
cdcserver_1  | 21:32:21.470 [main]       WARN  com.networknt.server.Server - Warning! No light-env has been passed in from command line. Default to dev
cdcserver_1  | 21:32:21.471 [main]       INFO  com.networknt.server.Server - light-config-server-uri is missing in the command line. Use local config files
cdcserver_1  | 21:32:21.480 [main]       INFO  com.networknt.config.Config - Config loaded from externalized folder for cdc.yml in /config
cdcserver_1  | 21:32:21.564 [main]       INFO  com.networknt.config.Config - Config loaded from externalized folder for kafka.yml in /config
cdcserver_1  | 21:32:21.747 [main]       INFO  o.a.c.f.imps.CuratorFrameworkImpl - Starting
cdcserver_1  | 21:32:21.751 [main]       DEBUG o.a.curator.CuratorZookeeperClient - Starting
cdcserver_1  | 21:32:21.751 [main]       DEBUG org.apache.curator.ConnectionState - Starting
cdcserver_1  | 21:32:21.752 [main]       DEBUG org.apache.curator.ConnectionState - reset
cdcserver_1  | 21:32:21.763 [main]       INFO  org.apache.zookeeper.ZooKeeper - Client environment:zookeeper.version=3.5.2-alpha-1750793, built on 06/30/2016 13:15 GMT
cdcserver_1  | 21:32:21.763 [main]       INFO  org.apache.zookeeper.ZooKeeper - Client environment:host.name=774406ae9158
cdcserver_1  | 21:32:21.763 [main]       INFO  org.apache.zookeeper.ZooKeeper - Client environment:java.version=1.8.0_131
cdcserver_1  | 21:32:21.764 [main]       INFO  org.apache.zookeeper.ZooKeeper - Client environment:java.vendor=Oracle Corporation
cdcserver_1  | 21:32:21.764 [main]       INFO  org.apache.zookeeper.ZooKeeper - Client environment:java.home=/usr/lib/jvm/java-1.8-openjdk/jre
cdcserver_1  | 21:32:21.764 [main]       INFO  org.apache.zookeeper.ZooKeeper - Client environment:java.class.path=/server.jar
cdcserver_1  | 21:32:21.764 [main]       INFO  org.apache.zookeeper.ZooKeeper - Client environment:java.library.path=/usr/lib/jvm/java-1.8-openjdk/jre/lib/amd64/server:/usr/lib/jvm/java-1.8-openjdk/jre/lib/amd64:/usr/lib/jvm/java-1.8-openjdk/jre/../lib/amd64:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
cdcserver_1  | 21:32:21.764 [main]       INFO  org.apache.zookeeper.ZooKeeper - Client environment:java.io.tmpdir=/tmp
cdcserver_1  | 21:32:21.764 [main]       INFO  org.apache.zookeeper.ZooKeeper - Client environment:java.compiler=<NA>
cdcserver_1  | 21:32:21.764 [main]       INFO  org.apache.zookeeper.ZooKeeper - Client environment:os.name=Linux
cdcserver_1  | 21:32:21.764 [main]       INFO  org.apache.zookeeper.ZooKeeper - Client environment:os.arch=amd64
cdcserver_1  | 21:32:21.764 [main]       INFO  org.apache.zookeeper.ZooKeeper - Client environment:os.version=4.9.36-moby
cdcserver_1  | 21:32:21.764 [main]       INFO  org.apache.zookeeper.ZooKeeper - Client environment:user.name=root
cdcserver_1  | 21:32:21.764 [main]       INFO  org.apache.zookeeper.ZooKeeper - Client environment:user.home=/root
cdcserver_1  | 21:32:21.764 [main]       INFO  org.apache.zookeeper.ZooKeeper - Client environment:user.dir=/
cdcserver_1  | 21:32:21.765 [main]       INFO  org.apache.zookeeper.ZooKeeper - Client environment:os.memory.free=19MB
cdcserver_1  | 21:32:21.765 [main]       INFO  org.apache.zookeeper.ZooKeeper - Client environment:os.memory.max=444MB
cdcserver_1  | 21:32:21.765 [main]       INFO  org.apache.zookeeper.ZooKeeper - Client environment:os.memory.total=31MB
cdcserver_1  | 21:32:21.771 [main]       INFO  org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=zookeeper:2181 sessionTimeout=60000 watcher=org.apache.curator.ConnectionState@661972b0
cdcserver_1  | 21:32:21.788 [main]       DEBUG o.apache.zookeeper.ClientCnxnSocket - jute.maxbuffer is 4194304
cdcserver_1  | 21:32:21.809 [main-SendThread(zookeeper:2181)]       INFO  org.apache.zookeeper.ClientCnxn - Opening socket connection to server zookeeper/172.18.0.2:2181. Will not attempt to authenticate using SASL (unknown error)
cdcserver_1  | 21:32:21.829 [main]       INFO  o.a.c.f.imps.CuratorFrameworkImpl - Default schema
cdcserver_1  | 21:32:21.900 [main]       INFO  o.a.k.c.producer.ProducerConfig - ProducerConfig values: 
cdcserver_1  |  acks = all
cdcserver_1  |  batch.size = 16384
cdcserver_1  |  block.on.buffer.full = false
cdcserver_1  |  bootstrap.servers = [kafka:9092]
cdcserver_1  |  buffer.memory = 33554432
cdcserver_1  |  client.id = 
cdcserver_1  |  compression.type = none
cdcserver_1  |  connections.max.idle.ms = 540000
cdcserver_1  |  interceptor.classes = null
cdcserver_1  |  key.serializer = class org.apache.kafka.common.serialization.StringSerializer
cdcserver_1  |  linger.ms = 1
cdcserver_1  |  max.block.ms = 60000
cdcserver_1  |  max.in.flight.requests.per.connection = 5
cdcserver_1  |  max.request.size = 1048576
cdcserver_1  |  metadata.fetch.timeout.ms = 60000
cdcserver_1  |  metadata.max.age.ms = 300000
cdcserver_1  |  metric.reporters = []
cdcserver_1  |  metrics.num.samples = 2
cdcserver_1  |  metrics.sample.window.ms = 30000
cdcserver_1  |  partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
cdcserver_1  |  receive.buffer.bytes = 32768
cdcserver_1  |  reconnect.backoff.ms = 50
cdcserver_1  |  request.timeout.ms = 30000
cdcserver_1  |  retries = 0
cdcserver_1  |  retry.backoff.ms = 100
cdcserver_1  |  sasl.kerberos.kinit.cmd = /usr/bin/kinit
cdcserver_1  |  sasl.kerberos.min.time.before.relogin = 60000
cdcserver_1  |  sasl.kerberos.service.name = null
cdcserver_1  |  sasl.kerberos.ticket.renew.jitter = 0.05
cdcserver_1  |  sasl.kerberos.ticket.renew.window.factor = 0.8
cdcserver_1  |  sasl.mechanism = GSSAPI
cdcserver_1  |  security.protocol = PLAINTEXT
cdcserver_1  |  send.buffer.bytes = 131072
cdcserver_1  |  ssl.cipher.suites = null
cdcserver_1  |  ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
cdcserver_1  |  ssl.endpoint.identification.algorithm = null
cdcserver_1  |  ssl.key.password = null
cdcserver_1  |  ssl.keymanager.algorithm = SunX509
cdcserver_1  |  ssl.keystore.location = null
cdcserver_1  |  ssl.keystore.password = null
cdcserver_1  |  ssl.keystore.type = JKS
cdcserver_1  |  ssl.protocol = TLS
cdcserver_1  |  ssl.provider = null
cdcserver_1  |  ssl.secure.random.implementation = null
cdcserver_1  |  ssl.trustmanager.algorithm = PKIX
cdcserver_1  |  ssl.truststore.location = null
cdcserver_1  |  ssl.truststore.password = null
cdcserver_1  |  ssl.truststore.type = JKS
cdcserver_1  |  timeout.ms = 30000
cdcserver_1  |  value.serializer = class org.apache.kafka.common.serialization.StringSerializer
cdcserver_1  | 
cdcserver_1  | 21:32:22.022 [main]       INFO  o.a.k.c.producer.ProducerConfig - ProducerConfig values: 
cdcserver_1  |  acks = all
cdcserver_1  |  batch.size = 16384
cdcserver_1  |  block.on.buffer.full = false
cdcserver_1  |  bootstrap.servers = [kafka:9092]
cdcserver_1  |  buffer.memory = 33554432
cdcserver_1  |  client.id = producer-1
cdcserver_1  |  compression.type = none
cdcserver_1  |  connections.max.idle.ms = 540000
cdcserver_1  |  interceptor.classes = null
cdcserver_1  |  key.serializer = class org.apache.kafka.common.serialization.StringSerializer
cdcserver_1  |  linger.ms = 1
cdcserver_1  |  max.block.ms = 60000
cdcserver_1  |  max.in.flight.requests.per.connection = 5
cdcserver_1  |  max.request.size = 1048576
cdcserver_1  |  metadata.fetch.timeout.ms = 60000
cdcserver_1  |  metadata.max.age.ms = 300000
cdcserver_1  |  metric.reporters = []
cdcserver_1  |  metrics.num.samples = 2
cdcserver_1  |  metrics.sample.window.ms = 30000
cdcserver_1  |  partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
cdcserver_1  |  receive.buffer.bytes = 32768
cdcserver_1  |  reconnect.backoff.ms = 50
cdcserver_1  |  request.timeout.ms = 30000
cdcserver_1  |  retries = 0
cdcserver_1  |  retry.backoff.ms = 100
cdcserver_1  |  sasl.kerberos.kinit.cmd = /usr/bin/kinit
cdcserver_1  |  sasl.kerberos.min.time.before.relogin = 60000
cdcserver_1  |  sasl.kerberos.service.name = null
cdcserver_1  |  sasl.kerberos.ticket.renew.jitter = 0.05
cdcserver_1  |  sasl.kerberos.ticket.renew.window.factor = 0.8
cdcserver_1  |  sasl.mechanism = GSSAPI
cdcserver_1  |  security.protocol = PLAINTEXT
cdcserver_1  |  send.buffer.bytes = 131072
cdcserver_1  |  ssl.cipher.suites = null
cdcserver_1  |  ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
cdcserver_1  |  ssl.endpoint.identification.algorithm = null
cdcserver_1  |  ssl.key.password = null
cdcserver_1  |  ssl.keymanager.algorithm = SunX509
cdcserver_1  |  ssl.keystore.location = null
cdcserver_1  |  ssl.keystore.password = null
cdcserver_1  |  ssl.keystore.type = JKS
cdcserver_1  |  ssl.protocol = TLS
cdcserver_1  |  ssl.provider = null
cdcserver_1  |  ssl.secure.random.implementation = null
cdcserver_1  |  ssl.trustmanager.algorithm = PKIX
cdcserver_1  |  ssl.truststore.location = null
cdcserver_1  |  ssl.truststore.password = null
cdcserver_1  |  ssl.truststore.type = JKS
cdcserver_1  |  timeout.ms = 30000
cdcserver_1  |  value.serializer = class org.apache.kafka.common.serialization.StringSerializer
cdcserver_1  | 
cdcserver_1  | 21:32:22.035 [main]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name bufferpool-wait-time
cdcserver_1  | 21:32:22.041 [main]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name buffer-exhausted-records
cdcserver_1  | 21:32:22.045 [main]       DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 1 to Cluster(id = null, nodes = [kafka:9092 (id: -1 rack: null)], partitions = [])
cdcserver_1  | 21:32:22.051 [main-SendThread(zookeeper:2181)]       INFO  org.apache.zookeeper.ClientCnxn - Socket connection established, initiating session, client: /172.18.0.5:60814, server: zookeeper/172.18.0.2:2181
cdcserver_1  | 21:32:22.054 [main-SendThread(zookeeper:2181)]       DEBUG org.apache.zookeeper.ClientCnxn - Session establishment request sent on zookeeper/172.18.0.2:2181
cdcserver_1  | 21:32:22.064 [main-SendThread(zookeeper:2181)]       INFO  org.apache.zookeeper.ClientCnxn - Session establishment complete on server zookeeper/172.18.0.2:2181, sessionid = 0x15dec9850990002, negotiated timeout = 40000
cdcserver_1  | 21:32:22.067 [main]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name connections-closed:
cdcserver_1  | 21:32:22.067 [main]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name connections-created:
cdcserver_1  | 21:32:22.067 [main]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name bytes-sent-received:
cdcserver_1  | 21:32:22.068 [main]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name bytes-sent:
cdcserver_1  | 21:32:22.072 [main-EventThread]       DEBUG org.apache.curator.ConnectionState - Negotiated session timeout: 40000
cdcserver_1  | 21:32:22.074 [main]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name bytes-received:
cdcserver_1  | 21:32:22.077 [main]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name select-time:
cdcserver_1  | 21:32:22.080 [main]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name io-time:
cdcserver_1  | 21:32:22.088 [main-EventThread]       INFO  o.a.c.f.state.ConnectionStateManager - State change: CONNECTED
cdcserver_1  | 21:32:22.100 [main-SendThread(zookeeper:2181)]       DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15dec9850990002, packet:: clientPath:/zookeeper/config serverPath:/zookeeper/config finished:false header:: 1,4  replyHeader:: 1,195,-101  request:: '/zookeeper/config,T  response::  
cdcserver_1  | 21:32:22.092 [main]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name batch-size
cdcserver_1  | 21:32:22.104 [main]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name compression-rate
cdcserver_1  | 21:32:22.105 [main]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name queue-time
cdcserver_1  | 21:32:22.106 [main]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name request-time
cdcserver_1  | 21:32:22.106 [main]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name produce-throttle-time
cdcserver_1  | 21:32:22.107 [main]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name records-per-request
cdcserver_1  | 21:32:22.108 [main]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name record-retries
cdcserver_1  | 21:32:22.109 [main]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name errors
cdcserver_1  | 21:32:22.109 [main]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name record-size-max
cdcserver_1  | 21:32:22.117 [kafka-producer-network-thread | producer-1]       DEBUG o.a.k.c.producer.internals.Sender - Starting Kafka producer I/O thread.
cdcserver_1  | 21:32:22.121 [main]       INFO  o.a.kafka.common.utils.AppInfoParser - Kafka version : 0.10.1.1
cdcserver_1  | 21:32:22.122 [main]       INFO  o.a.kafka.common.utils.AppInfoParser - Kafka commitId : f10ef2720b03b247
cdcserver_1  | 21:32:22.124 [main]       DEBUG o.a.k.clients.producer.KafkaProducer - Kafka producer started
cdcserver_1  | 21:32:22.166 [main]       INFO  c.n.e.c.m.EventTableChangesToAggregateTopicTranslator - CDC initialized. Ready to become leader
cdcserver_1  | CdcServerStartupHookProvider is called
cdcserver_1  | 21:32:22.180 [main]       DEBUG com.networknt.server.Server - found middlewareLoaders
cdcserver_1  | 21:32:22.189 [main]       INFO  com.networknt.config.Config - Unable to load config from externalized folder for tls/server.keystore in /config
cdcserver_1  | 21:32:22.190 [main]       INFO  com.networknt.config.Config - Trying to load config from classpath directory for file tls/server.keystore
cdcserver_1  | 21:32:22.191 [main]       INFO  com.networknt.config.Config - Config loaded from default folder for tls/server.keystore
cdcserver_1  | 21:32:22.206 [main-SendThread(zookeeper:2181)]       DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15dec9850990002, packet:: clientPath:null serverPath:null finished:false header:: 2,15  replyHeader:: 2,-1,-6  request:: '/eventuatelocal/cdc/leader/_c_13e39942-d07b-4ede-a7cb-950c3a86bd7f-lock-,#3137322e31382e302e35,v{s{31,s{'world,'anyone}}},3  response::  
cdcserver_1  | 21:32:22.207 [main-SendThread(zookeeper:2181)]       INFO  org.apache.zookeeper.ClientCnxn - Unable to read additional data from server sessionid 0x15dec9850990002, likely server has closed socket, closing socket connection and attempting reconnect
cdcserver_1  | 21:32:22.309 [main-EventThread]       INFO  o.a.c.f.state.ConnectionStateManager - State change: SUSPENDED
cdcserver_1  | 21:32:22.310 [Curator-ConnectionStateManager-0]       DEBUG c.n.e.c.m.EventTableChangesToAggregateTopicTranslator - StateChanged: SUSPENDED
cdcserver_1  | 21:32:22.314 [Curator-ConnectionStateManager-0]       INFO  c.n.e.c.m.EventTableChangesToAggregateTopicTranslator - Resigning leadership
cdcserver_1  | 21:32:22.315 [Curator-ConnectionStateManager-0]       DEBUG c.n.e.c.m.EventTableChangesToAggregateTopicTranslator - Stopping to capture changes
cdcserver_1  | 21:32:22.317 [Curator-ConnectionStateManager-0]       DEBUG c.n.e.c.mysql.MySQLCdcKafkaPublisher - Stopping kafka producer
cdcserver_1  | 21:32:22.508 [main]       DEBUG org.jboss.logging - Logging Provider: org.jboss.logging.Slf4jLoggerProvider found via system property
cdcserver_1  | 21:32:22.512 [main]       DEBUG io.undertow - starting undertow server io.undertow.Undertow@6253c26
cdcserver_1  | 21:32:22.525 [main]       INFO  org.xnio - XNIO version 3.3.6.Final
cdcserver_1  | 21:32:22.538 [main]       INFO  org.xnio.nio - XNIO NIO Implementation Version 3.3.6.Final
cdcserver_1  | 21:32:22.584 [XNIO-1 I/O-1]       DEBUG org.xnio.nio - Started channel thread 'XNIO-1 I/O-1', selector sun.nio.ch.EPollSelectorImpl@4fbd6cb4
cdcserver_1  | 21:32:22.585 [XNIO-1 I/O-2]       DEBUG org.xnio.nio - Started channel thread 'XNIO-1 I/O-2', selector sun.nio.ch.EPollSelectorImpl@624f3994
cdcserver_1  | 21:32:22.587 [XNIO-1 I/O-3]       DEBUG org.xnio.nio - Started channel thread 'XNIO-1 I/O-3', selector sun.nio.ch.EPollSelectorImpl@4e3ca8f
cdcserver_1  | 21:32:22.588 [XNIO-1 I/O-4]       DEBUG org.xnio.nio - Started channel thread 'XNIO-1 I/O-4', selector sun.nio.ch.EPollSelectorImpl@690334c9
cdcserver_1  | 21:32:22.593 [main]       DEBUG io.undertow - Configuring listener with protocol HTTP for interface 0.0.0.0 and port 8080
cdcserver_1  | 21:32:22.595 [XNIO-1 Accept]       DEBUG org.xnio.nio - Started channel thread 'XNIO-1 Accept', selector sun.nio.ch.EPollSelectorImpl@33f52187
cdcserver_1  | 21:32:22.657 [main]       DEBUG io.undertow - Configuring listener with protocol HTTPS for interface 0.0.0.0 and port 8443
cdcserver_1  | 21:32:22.668 [main]       INFO  com.networknt.server.Server - Http Server started on ip:0.0.0.0 Port:8080
cdcserver_1  | 21:32:22.669 [main]       INFO  com.networknt.server.Server - Https Server started on ip:0.0.0.0 Port:8443
cdcserver_1  | 21:32:23.962 [main-SendThread(zookeeper:2181)]       INFO  org.apache.zookeeper.ClientCnxn - Opening socket connection to server zookeeper/172.18.0.2:2181. Will not attempt to authenticate using SASL (unknown error)
cdcserver_1  | 21:32:23.965 [main-SendThread(zookeeper:2181)]       INFO  org.apache.zookeeper.ClientCnxn - Socket connection established, initiating session, client: /172.18.0.5:60816, server: zookeeper/172.18.0.2:2181
cdcserver_1  | 21:32:23.966 [main-SendThread(zookeeper:2181)]       DEBUG org.apache.zookeeper.ClientCnxn - Session establishment request sent on zookeeper/172.18.0.2:2181
cdcserver_1  | 21:32:23.972 [main-SendThread(zookeeper:2181)]       INFO  org.apache.zookeeper.ClientCnxn - Session establishment complete on server zookeeper/172.18.0.2:2181, sessionid = 0x15dec9850990002, negotiated timeout = 40000
cdcserver_1  | 21:32:23.973 [main-EventThread]       DEBUG org.apache.curator.ConnectionState - Negotiated session timeout: 40000
cdcserver_1  | 21:32:23.973 [main-EventThread]       INFO  o.a.c.f.state.ConnectionStateManager - State change: RECONNECTED
cdcserver_1  | 21:32:23.975 [Curator-ConnectionStateManager-0]       DEBUG c.n.e.c.m.EventTableChangesToAggregateTopicTranslator - StateChanged: RECONNECTED
cdcserver_1  | 21:32:23.977 [main-SendThread(zookeeper:2181)]       DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15dec9850990002, packet:: clientPath:/zookeeper/config serverPath:/zookeeper/config finished:false header:: 3,4  replyHeader:: 3,195,-101  request:: '/zookeeper/config,T  response::  
cdcserver_1  | 21:32:23.976 [Curator-ConnectionStateManager-0]       INFO  c.n.e.c.m.EventTableChangesToAggregateTopicTranslator - Taking leadership
cdcserver_1  | 21:32:23.978 [Curator-ConnectionStateManager-0]       DEBUG c.n.e.c.m.EventTableChangesToAggregateTopicTranslator - Starting to capture changes
cdcserver_1  | 21:32:23.979 [Curator-ConnectionStateManager-0]       DEBUG c.n.e.c.mysql.MySQLCdcKafkaPublisher - Starting MySQLCdcKafkaPublisher
cdcserver_1  | 21:32:23.981 [Curator-ConnectionStateManager-0]       INFO  o.a.k.c.producer.ProducerConfig - ProducerConfig values: 
cdcserver_1  |  acks = all
cdcserver_1  |  batch.size = 16384
cdcserver_1  |  block.on.buffer.full = false
cdcserver_1  |  bootstrap.servers = [kafka:9092]
cdcserver_1  |  buffer.memory = 33554432
cdcserver_1  |  client.id = 
cdcserver_1  |  compression.type = none
cdcserver_1  |  connections.max.idle.ms = 540000
cdcserver_1  |  interceptor.classes = null
cdcserver_1  |  key.serializer = class org.apache.kafka.common.serialization.StringSerializer
cdcserver_1  |  linger.ms = 1
cdcserver_1  |  max.block.ms = 60000
cdcserver_1  |  max.in.flight.requests.per.connection = 5
cdcserver_1  |  max.request.size = 1048576
cdcserver_1  |  metadata.fetch.timeout.ms = 60000
cdcserver_1  |  metadata.max.age.ms = 300000
cdcserver_1  |  metric.reporters = []
cdcserver_1  |  metrics.num.samples = 2
cdcserver_1  |  metrics.sample.window.ms = 30000
cdcserver_1  |  partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
cdcserver_1  |  receive.buffer.bytes = 32768
cdcserver_1  |  reconnect.backoff.ms = 50
cdcserver_1  |  request.timeout.ms = 30000
cdcserver_1  |  retries = 0
cdcserver_1  |  retry.backoff.ms = 100
cdcserver_1  |  sasl.kerberos.kinit.cmd = /usr/bin/kinit
cdcserver_1  |  sasl.kerberos.min.time.before.relogin = 60000
cdcserver_1  |  sasl.kerberos.service.name = null
cdcserver_1  |  sasl.kerberos.ticket.renew.jitter = 0.05
cdcserver_1  |  sasl.kerberos.ticket.renew.window.factor = 0.8
cdcserver_1  |  sasl.mechanism = GSSAPI
cdcserver_1  |  security.protocol = PLAINTEXT
cdcserver_1  |  send.buffer.bytes = 131072
cdcserver_1  |  ssl.cipher.suites = null
cdcserver_1  |  ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
cdcserver_1  |  ssl.endpoint.identification.algorithm = null
cdcserver_1  |  ssl.key.password = null
cdcserver_1  |  ssl.keymanager.algorithm = SunX509
cdcserver_1  |  ssl.keystore.location = null
cdcserver_1  |  ssl.keystore.password = null
cdcserver_1  |  ssl.keystore.type = JKS
cdcserver_1  |  ssl.protocol = TLS
cdcserver_1  |  ssl.provider = null
cdcserver_1  |  ssl.secure.random.implementation = null
cdcserver_1  |  ssl.trustmanager.algorithm = PKIX
cdcserver_1  |  ssl.truststore.location = null
cdcserver_1  |  ssl.truststore.password = null
cdcserver_1  |  ssl.truststore.type = JKS
cdcserver_1  |  timeout.ms = 30000
cdcserver_1  |  value.serializer = class org.apache.kafka.common.serialization.StringSerializer
cdcserver_1  | 
cdcserver_1  | 21:32:23.983 [Curator-ConnectionStateManager-0]       INFO  o.a.k.c.producer.ProducerConfig - ProducerConfig values: 
cdcserver_1  |  acks = all
cdcserver_1  |  batch.size = 16384
cdcserver_1  |  block.on.buffer.full = false
cdcserver_1  |  bootstrap.servers = [kafka:9092]
cdcserver_1  |  buffer.memory = 33554432
cdcserver_1  |  client.id = producer-2
cdcserver_1  |  compression.type = none
cdcserver_1  |  connections.max.idle.ms = 540000
cdcserver_1  |  interceptor.classes = null
cdcserver_1  |  key.serializer = class org.apache.kafka.common.serialization.StringSerializer
cdcserver_1  |  linger.ms = 1
cdcserver_1  |  max.block.ms = 60000
cdcserver_1  |  max.in.flight.requests.per.connection = 5
cdcserver_1  |  max.request.size = 1048576
cdcserver_1  |  metadata.fetch.timeout.ms = 60000
cdcserver_1  |  metadata.max.age.ms = 300000
cdcserver_1  |  metric.reporters = []
cdcserver_1  |  metrics.num.samples = 2
cdcserver_1  |  metrics.sample.window.ms = 30000
cdcserver_1  |  partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
cdcserver_1  |  receive.buffer.bytes = 32768
cdcserver_1  |  reconnect.backoff.ms = 50
cdcserver_1  |  request.timeout.ms = 30000
cdcserver_1  |  retries = 0
cdcserver_1  |  retry.backoff.ms = 100
cdcserver_1  |  sasl.kerberos.kinit.cmd = /usr/bin/kinit
cdcserver_1  |  sasl.kerberos.min.time.before.relogin = 60000
cdcserver_1  |  sasl.kerberos.service.name = null
cdcserver_1  |  sasl.kerberos.ticket.renew.jitter = 0.05
cdcserver_1  |  sasl.kerberos.ticket.renew.window.factor = 0.8
cdcserver_1  |  sasl.mechanism = GSSAPI
cdcserver_1  |  security.protocol = PLAINTEXT
cdcserver_1  |  send.buffer.bytes = 131072
cdcserver_1  |  ssl.cipher.suites = null
cdcserver_1  |  ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
cdcserver_1  |  ssl.endpoint.identification.algorithm = null
cdcserver_1  |  ssl.key.password = null
cdcserver_1  |  ssl.keymanager.algorithm = SunX509
cdcserver_1  |  ssl.keystore.location = null
cdcserver_1  |  ssl.keystore.password = null
cdcserver_1  |  ssl.keystore.type = JKS
cdcserver_1  |  ssl.protocol = TLS
cdcserver_1  |  ssl.provider = null
cdcserver_1  |  ssl.secure.random.implementation = null
cdcserver_1  |  ssl.trustmanager.algorithm = PKIX
cdcserver_1  |  ssl.truststore.location = null
cdcserver_1  |  ssl.truststore.password = null
cdcserver_1  |  ssl.truststore.type = JKS
cdcserver_1  |  timeout.ms = 30000
cdcserver_1  |  value.serializer = class org.apache.kafka.common.serialization.StringSerializer
cdcserver_1  | 
cdcserver_1  | 21:32:23.985 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name bufferpool-wait-time
cdcserver_1  | 21:32:23.986 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name buffer-exhausted-records
cdcserver_1  | 21:32:23.986 [Curator-ConnectionStateManager-0]       DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 1 to Cluster(id = null, nodes = [kafka:9092 (id: -1 rack: null)], partitions = [])
cdcserver_1  | 21:32:23.987 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name connections-closed:
cdcserver_1  | 21:32:23.988 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name connections-created:
cdcserver_1  | 21:32:23.988 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name bytes-sent-received:
cdcserver_1  | 21:32:23.989 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name bytes-sent:
cdcserver_1  | 21:32:23.990 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name bytes-received:
cdcserver_1  | 21:32:23.991 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name select-time:
cdcserver_1  | 21:32:23.991 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name io-time:
cdcserver_1  | 21:32:23.992 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name batch-size
cdcserver_1  | 21:32:23.993 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name compression-rate
cdcserver_1  | 21:32:23.993 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name queue-time
cdcserver_1  | 21:32:23.995 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name request-time
cdcserver_1  | 21:32:23.995 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name produce-throttle-time
cdcserver_1  | 21:32:23.996 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name records-per-request
cdcserver_1  | 21:32:23.997 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name record-retries
cdcserver_1  | 21:32:23.998 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name errors
cdcserver_1  | 21:32:23.999 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name record-size-max
cdcserver_1  | 21:32:24.001 [Curator-ConnectionStateManager-0]       INFO  o.a.kafka.common.utils.AppInfoParser - Kafka version : 0.10.1.1
cdcserver_1  | 21:32:24.001 [Curator-ConnectionStateManager-0]       INFO  o.a.kafka.common.utils.AppInfoParser - Kafka commitId : f10ef2720b03b247
cdcserver_1  | 21:32:24.001 [Curator-ConnectionStateManager-0]       DEBUG o.a.k.clients.producer.KafkaProducer - Kafka producer started
cdcserver_1  | 21:32:24.002 [Curator-ConnectionStateManager-0]       DEBUG c.n.e.c.mysql.MySQLCdcKafkaPublisher - Starting MySQLCdcKafkaPublisher
cdcserver_1  | 21:32:24.011 [kafka-producer-network-thread | producer-2]       DEBUG o.a.k.c.producer.internals.Sender - Starting Kafka producer I/O thread.
cdcserver_1  | 21:32:24.018 [Curator-ConnectionStateManager-0]       INFO  o.a.k.c.consumer.ConsumerConfig - ConsumerConfig values: 
cdcserver_1  |  auto.commit.interval.ms = 1000
cdcserver_1  |  auto.offset.reset = earliest
cdcserver_1  |  bootstrap.servers = [kafka:9092]
cdcserver_1  |  check.crcs = true
cdcserver_1  |  client.id = 
cdcserver_1  |  connections.max.idle.ms = 540000
cdcserver_1  |  enable.auto.commit = false
cdcserver_1  |  exclude.internal.topics = true
cdcserver_1  |  fetch.max.bytes = 52428800
cdcserver_1  |  fetch.max.wait.ms = 500
cdcserver_1  |  fetch.min.bytes = 1
cdcserver_1  |  group.id = 88c3f9ed-2063-4740-8a87-53ab288411d7
cdcserver_1  |  heartbeat.interval.ms = 3000
cdcserver_1  |  interceptor.classes = null
cdcserver_1  |  key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
cdcserver_1  |  max.partition.fetch.bytes = 1048576
cdcserver_1  |  max.poll.interval.ms = 300000
cdcserver_1  |  max.poll.records = 500
cdcserver_1  |  metadata.max.age.ms = 300000
cdcserver_1  |  metric.reporters = []
cdcserver_1  |  metrics.num.samples = 2
cdcserver_1  |  metrics.sample.window.ms = 30000
cdcserver_1  |  partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
cdcserver_1  |  receive.buffer.bytes = 65536
cdcserver_1  |  reconnect.backoff.ms = 50
cdcserver_1  |  request.timeout.ms = 305000
cdcserver_1  |  retry.backoff.ms = 100
cdcserver_1  |  sasl.kerberos.kinit.cmd = /usr/bin/kinit
cdcserver_1  |  sasl.kerberos.min.time.before.relogin = 60000
cdcserver_1  |  sasl.kerberos.service.name = null
cdcserver_1  |  sasl.kerberos.ticket.renew.jitter = 0.05
cdcserver_1  |  sasl.kerberos.ticket.renew.window.factor = 0.8
cdcserver_1  |  sasl.mechanism = GSSAPI
cdcserver_1  |  security.protocol = PLAINTEXT
cdcserver_1  |  send.buffer.bytes = 131072
cdcserver_1  |  session.timeout.ms = 30000
cdcserver_1  |  ssl.cipher.suites = null
cdcserver_1  |  ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
cdcserver_1  |  ssl.endpoint.identification.algorithm = null
cdcserver_1  |  ssl.key.password = null
cdcserver_1  |  ssl.keymanager.algorithm = SunX509
cdcserver_1  |  ssl.keystore.location = null
cdcserver_1  |  ssl.keystore.password = null
cdcserver_1  |  ssl.keystore.type = JKS
cdcserver_1  |  ssl.protocol = TLS
cdcserver_1  |  ssl.provider = null
cdcserver_1  |  ssl.secure.random.implementation = null
cdcserver_1  |  ssl.trustmanager.algorithm = PKIX
cdcserver_1  |  ssl.truststore.location = null
cdcserver_1  |  ssl.truststore.password = null
cdcserver_1  |  ssl.truststore.type = JKS
cdcserver_1  |  value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
cdcserver_1  | 
cdcserver_1  | 21:32:24.019 [Curator-ConnectionStateManager-0]       DEBUG o.a.k.clients.consumer.KafkaConsumer - Starting the Kafka consumer
cdcserver_1  | 21:32:24.020 [Curator-ConnectionStateManager-0]       INFO  o.a.k.c.consumer.ConsumerConfig - ConsumerConfig values: 
cdcserver_1  |  auto.commit.interval.ms = 1000
cdcserver_1  |  auto.offset.reset = earliest
cdcserver_1  |  bootstrap.servers = [kafka:9092]
cdcserver_1  |  check.crcs = true
cdcserver_1  |  client.id = consumer-1
cdcserver_1  |  connections.max.idle.ms = 540000
cdcserver_1  |  enable.auto.commit = false
cdcserver_1  |  exclude.internal.topics = true
cdcserver_1  |  fetch.max.bytes = 52428800
cdcserver_1  |  fetch.max.wait.ms = 500
cdcserver_1  |  fetch.min.bytes = 1
cdcserver_1  |  group.id = 88c3f9ed-2063-4740-8a87-53ab288411d7
cdcserver_1  |  heartbeat.interval.ms = 3000
cdcserver_1  |  interceptor.classes = null
cdcserver_1  |  key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
cdcserver_1  |  max.partition.fetch.bytes = 1048576
cdcserver_1  |  max.poll.interval.ms = 300000
cdcserver_1  |  max.poll.records = 500
cdcserver_1  |  metadata.max.age.ms = 300000
cdcserver_1  |  metric.reporters = []
cdcserver_1  |  metrics.num.samples = 2
cdcserver_1  |  metrics.sample.window.ms = 30000
cdcserver_1  |  partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
cdcserver_1  |  receive.buffer.bytes = 65536
cdcserver_1  |  reconnect.backoff.ms = 50
cdcserver_1  |  request.timeout.ms = 305000
cdcserver_1  |  retry.backoff.ms = 100
cdcserver_1  |  sasl.kerberos.kinit.cmd = /usr/bin/kinit
cdcserver_1  |  sasl.kerberos.min.time.before.relogin = 60000
cdcserver_1  |  sasl.kerberos.service.name = null
cdcserver_1  |  sasl.kerberos.ticket.renew.jitter = 0.05
cdcserver_1  |  sasl.kerberos.ticket.renew.window.factor = 0.8
cdcserver_1  |  sasl.mechanism = GSSAPI
cdcserver_1  |  security.protocol = PLAINTEXT
cdcserver_1  |  send.buffer.bytes = 131072
cdcserver_1  |  session.timeout.ms = 30000
cdcserver_1  |  ssl.cipher.suites = null
cdcserver_1  |  ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
cdcserver_1  |  ssl.endpoint.identification.algorithm = null
cdcserver_1  |  ssl.key.password = null
cdcserver_1  |  ssl.keymanager.algorithm = SunX509
cdcserver_1  |  ssl.keystore.location = null
cdcserver_1  |  ssl.keystore.password = null
cdcserver_1  |  ssl.keystore.type = JKS
cdcserver_1  |  ssl.protocol = TLS
cdcserver_1  |  ssl.provider = null
cdcserver_1  |  ssl.secure.random.implementation = null
cdcserver_1  |  ssl.trustmanager.algorithm = PKIX
cdcserver_1  |  ssl.truststore.location = null
cdcserver_1  |  ssl.truststore.password = null
cdcserver_1  |  ssl.truststore.type = JKS
cdcserver_1  |  value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
cdcserver_1  | 
cdcserver_1  | 21:32:24.023 [Curator-ConnectionStateManager-0]       DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 1 to Cluster(id = null, nodes = [kafka:9092 (id: -1 rack: null)], partitions = [])
cdcserver_1  | 21:32:24.023 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name connections-closed:
cdcserver_1  | 21:32:24.024 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name connections-created:
cdcserver_1  | 21:32:24.025 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name bytes-sent-received:
cdcserver_1  | 21:32:24.025 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name bytes-sent:
cdcserver_1  | 21:32:24.027 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name bytes-received:
cdcserver_1  | 21:32:24.028 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name select-time:
cdcserver_1  | 21:32:24.029 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name io-time:
cdcserver_1  | 21:32:24.059 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name heartbeat-latency
cdcserver_1  | 21:32:24.061 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name join-latency
cdcserver_1  | 21:32:24.062 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name sync-latency
cdcserver_1  | 21:32:24.068 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name commit-latency
cdcserver_1  | 21:32:24.083 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name bytes-fetched
cdcserver_1  | 21:32:24.086 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name records-fetched
cdcserver_1  | 21:32:24.088 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name fetch-latency
cdcserver_1  | 21:32:24.090 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name records-lag
cdcserver_1  | 21:32:24.091 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name fetch-throttle-time
cdcserver_1  | 21:32:24.093 [Curator-ConnectionStateManager-0]       INFO  o.a.kafka.common.utils.AppInfoParser - Kafka version : 0.10.1.1
cdcserver_1  | 21:32:24.093 [Curator-ConnectionStateManager-0]       INFO  o.a.kafka.common.utils.AppInfoParser - Kafka commitId : f10ef2720b03b247
cdcserver_1  | 21:32:24.094 [Curator-ConnectionStateManager-0]       DEBUG o.a.k.clients.consumer.KafkaConsumer - Kafka consumer created
cdcserver_1  | 21:32:24.128 [Curator-ConnectionStateManager-0]       DEBUG o.apache.kafka.clients.NetworkClient - Initiating connection to node -1 at kafka:9092.
cdcserver_1  | 21:32:24.139 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name node--1.bytes-sent
cdcserver_1  | 21:32:24.141 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name node--1.bytes-received
cdcserver_1  | 21:32:24.142 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name node--1.latency
cdcserver_1  | 21:32:24.143 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.network.Selector - Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1
cdcserver_1  | 21:32:24.143 [Curator-ConnectionStateManager-0]       DEBUG o.apache.kafka.clients.NetworkClient - Completed connection to node -1
cdcserver_1  | 21:32:24.230 [Curator-ConnectionStateManager-0]       DEBUG o.apache.kafka.clients.NetworkClient - Sending metadata request {topics=[]} to node -1
cdcserver_1  | 21:32:24.264 [Curator-ConnectionStateManager-0]       DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 2 to Cluster(id = _-7S8XwNRyq3gIurHCm_hQ, nodes = [10.200.10.1:9092 (id: 0 rack: null)], partitions = [])
cdcserver_1  | 21:32:24.273 [Curator-ConnectionStateManager-0]       DEBUG o.a.k.clients.consumer.KafkaConsumer - Subscribed to topic(s): db.history.topic
cdcserver_1  | 21:32:24.274 [Curator-ConnectionStateManager-0]       DEBUG o.a.k.c.c.i.AbstractCoordinator - Sending coordinator request for group 88c3f9ed-2063-4740-8a87-53ab288411d7 to broker 10.200.10.1:9092 (id: 0 rack: null)
cdcserver_1  | 21:32:24.278 [Curator-ConnectionStateManager-0]       DEBUG o.apache.kafka.clients.NetworkClient - Initiating connection to node 0 at 10.200.10.1:9092.
cdcserver_1  | 21:32:24.280 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name node-0.bytes-sent
cdcserver_1  | 21:32:24.287 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name node-0.bytes-received
cdcserver_1  | 21:32:24.288 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name node-0.latency
cdcserver_1  | 21:32:24.288 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.network.Selector - Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 0
cdcserver_1  | 21:32:24.288 [Curator-ConnectionStateManager-0]       DEBUG o.apache.kafka.clients.NetworkClient - Completed connection to node 0
cdcserver_1  | 21:32:24.295 [Curator-ConnectionStateManager-0]       DEBUG o.a.k.c.c.i.AbstractCoordinator - Received group coordinator response ClientResponse(receivedTimeMs=1502919144294, disconnected=false, request=ClientRequest(expectResponse=true, callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler@dddf21d, request=RequestSend(header={api_key=10,api_version=0,correlation_id=2,client_id=consumer-1}, body={group_id=88c3f9ed-2063-4740-8a87-53ab288411d7}), createdTimeMs=1502919144277, sendTimeMs=1502919144289), responseBody={error_code=0,coordinator={node_id=0,host=10.200.10.1,port=9092}})
cdcserver_1  | 21:32:24.295 [Curator-ConnectionStateManager-0]       INFO  o.a.k.c.c.i.AbstractCoordinator - Discovered coordinator 10.200.10.1:9092 (id: 2147483647 rack: null) for group 88c3f9ed-2063-4740-8a87-53ab288411d7.
cdcserver_1  | 21:32:24.296 [Curator-ConnectionStateManager-0]       DEBUG o.apache.kafka.clients.NetworkClient - Initiating connection to node 2147483647 at 10.200.10.1:9092.
cdcserver_1  | 21:32:24.300 [Curator-ConnectionStateManager-0]       INFO  o.a.k.c.c.i.ConsumerCoordinator - Revoking previously assigned partitions [] for group 88c3f9ed-2063-4740-8a87-53ab288411d7
cdcserver_1  | 21:32:24.301 [Curator-ConnectionStateManager-0]       INFO  o.a.k.c.c.i.AbstractCoordinator - (Re-)joining group 88c3f9ed-2063-4740-8a87-53ab288411d7
cdcserver_1  | 21:32:24.305 [Curator-ConnectionStateManager-0]       DEBUG o.a.k.c.c.i.AbstractCoordinator - Sending JoinGroup ({group_id=88c3f9ed-2063-4740-8a87-53ab288411d7,session_timeout=30000,rebalance_timeout=300000,member_id=,protocol_type=consumer,group_protocols=[{protocol_name=range,protocol_metadata=java.nio.HeapByteBuffer[pos=0 lim=28 cap=28]}]}) to coordinator 10.200.10.1:9092 (id: 2147483647 rack: null)
cdcserver_1  | 21:32:24.308 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name node-2147483647.bytes-sent
cdcserver_1  | 21:32:24.309 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name node-2147483647.bytes-received
cdcserver_1  | 21:32:24.310 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name node-2147483647.latency
cdcserver_1  | 21:32:24.311 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.network.Selector - Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 2147483647
cdcserver_1  | 21:32:24.312 [Curator-ConnectionStateManager-0]       DEBUG o.apache.kafka.clients.NetworkClient - Completed connection to node 2147483647
cdcserver_1  | 21:32:24.319 [Curator-ConnectionStateManager-0]       DEBUG o.a.k.c.c.i.AbstractCoordinator - Received successful join group response for group 88c3f9ed-2063-4740-8a87-53ab288411d7: {error_code=0,generation_id=1,group_protocol=range,leader_id=consumer-1-a743c822-9d98-4a29-9d43-fa38508b163b,member_id=consumer-1-a743c822-9d98-4a29-9d43-fa38508b163b,members=[{member_id=consumer-1-a743c822-9d98-4a29-9d43-fa38508b163b,member_metadata=java.nio.HeapByteBuffer[pos=0 lim=28 cap=28]}]}
cdcserver_1  | 21:32:24.339 [Curator-ConnectionStateManager-0]       DEBUG o.apache.kafka.clients.NetworkClient - Sending metadata request {topics=[db.history.topic]} to node 0
cdcserver_1  | 21:32:24.342 [Curator-ConnectionStateManager-0]       DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 3 to Cluster(id = _-7S8XwNRyq3gIurHCm_hQ, nodes = [10.200.10.1:9092 (id: 0 rack: null)], partitions = [Partition(topic = db.history.topic, partition = 0, leader = 0, replicas = [0,], isr = [0,])])
cdcserver_1  | 21:32:24.343 [Curator-ConnectionStateManager-0]       DEBUG o.a.k.c.c.i.ConsumerCoordinator - Performing assignment for group 88c3f9ed-2063-4740-8a87-53ab288411d7 using strategy range with subscriptions {consumer-1-a743c822-9d98-4a29-9d43-fa38508b163b=Subscription(topics=[db.history.topic])}
cdcserver_1  | 21:32:24.345 [Curator-ConnectionStateManager-0]       DEBUG o.a.k.c.c.i.ConsumerCoordinator - Finished assignment for group 88c3f9ed-2063-4740-8a87-53ab288411d7: {consumer-1-a743c822-9d98-4a29-9d43-fa38508b163b=Assignment(partitions=[db.history.topic-0])}
cdcserver_1  | 21:32:24.347 [Curator-ConnectionStateManager-0]       DEBUG o.a.k.c.c.i.AbstractCoordinator - Sending leader SyncGroup for group 88c3f9ed-2063-4740-8a87-53ab288411d7 to coordinator 10.200.10.1:9092 (id: 2147483647 rack: null): {group_id=88c3f9ed-2063-4740-8a87-53ab288411d7,generation_id=1,member_id=consumer-1-a743c822-9d98-4a29-9d43-fa38508b163b,group_assignment=[{member_id=consumer-1-a743c822-9d98-4a29-9d43-fa38508b163b,member_assignment=java.nio.HeapByteBuffer[pos=0 lim=36 cap=36]}]}
cdcserver_1  | 21:32:24.363 [Curator-ConnectionStateManager-0]       INFO  o.a.k.c.c.i.AbstractCoordinator - Successfully joined group 88c3f9ed-2063-4740-8a87-53ab288411d7 with generation 1
cdcserver_1  | 21:32:24.369 [Curator-ConnectionStateManager-0]       INFO  o.a.k.c.c.i.ConsumerCoordinator - Setting newly assigned partitions [db.history.topic-0] for group 88c3f9ed-2063-4740-8a87-53ab288411d7
cdcserver_1  | 21:32:24.373 [Curator-ConnectionStateManager-0]       DEBUG o.a.k.c.c.i.ConsumerCoordinator - Group 88c3f9ed-2063-4740-8a87-53ab288411d7 fetching committed offsets for partitions: [db.history.topic-0]
cdcserver_1  | 21:32:24.388 [Curator-ConnectionStateManager-0]       DEBUG o.a.k.c.c.i.ConsumerCoordinator - Group 88c3f9ed-2063-4740-8a87-53ab288411d7 has no committed offset for partition db.history.topic-0
cdcserver_1  | 21:32:24.389 [Curator-ConnectionStateManager-0]       DEBUG o.a.k.c.consumer.internals.Fetcher - Resetting offset for partition db.history.topic-0 to earliest offset.
cdcserver_1  | 21:32:24.405 [Curator-ConnectionStateManager-0]       DEBUG o.a.k.c.consumer.internals.Fetcher - Fetched {timestamp=-1, offset=0} for partition db.history.topic-0
cdcserver_1  | 21:32:24.835 [main-SendThread(zookeeper:2181)]       DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15dec9850990002, packet:: clientPath:/zookeeper/config serverPath:/zookeeper/config finished:false header:: 4,4  replyHeader:: 4,195,-101  request:: '/zookeeper/config,T  response::  
cdcserver_1  | 21:32:24.979 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name topic.db.history.topic.bytes-fetched
cdcserver_1  | 21:32:24.981 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name topic.db.history.topic.records-fetched
cdcserver_1  | 21:32:26.381 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Removed sensor with name connections-closed:
cdcserver_1  | 21:32:26.382 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Removed sensor with name connections-created:
cdcserver_1  | 21:32:26.382 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Removed sensor with name bytes-sent-received:
cdcserver_1  | 21:32:26.383 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Removed sensor with name bytes-sent:
cdcserver_1  | 21:32:26.384 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Removed sensor with name bytes-received:
cdcserver_1  | 21:32:26.385 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Removed sensor with name select-time:
cdcserver_1  | 21:32:26.395 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Removed sensor with name io-time:
cdcserver_1  | 21:32:26.396 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Removed sensor with name node--1.bytes-sent
cdcserver_1  | 21:32:26.397 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Removed sensor with name node--1.bytes-received
cdcserver_1  | 21:32:26.398 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Removed sensor with name node--1.latency
cdcserver_1  | 21:32:26.398 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Removed sensor with name node-0.bytes-sent
cdcserver_1  | 21:32:26.399 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Removed sensor with name node-0.bytes-received
cdcserver_1  | 21:32:26.399 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Removed sensor with name node-0.latency
cdcserver_1  | 21:32:26.400 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Removed sensor with name node-2147483647.bytes-sent
cdcserver_1  | 21:32:26.400 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Removed sensor with name node-2147483647.bytes-received
cdcserver_1  | 21:32:26.401 [Curator-ConnectionStateManager-0]       DEBUG o.a.kafka.common.metrics.Metrics - Removed sensor with name node-2147483647.latency
cdcserver_1  | 21:32:26.402 [Curator-ConnectionStateManager-0]       DEBUG o.a.k.clients.consumer.KafkaConsumer - The Kafka consumer has closed.
cdcserver_1  | 21:32:26.427 [Curator-ConnectionStateManager-0]       DEBUG c.n.e.cdc.mysql.MySqlBinaryLogClient - Starting with com.networknt.eventuate.cdc.common.BinlogFileOffset@2780954b[binlogFilename=,offset=4]
cdcserver_1  | Aug 16, 2017 9:32:26 PM com.github.shyiko.mysql.binlog.BinaryLogClient connect
cdcserver_1  | INFO: Connected to mysql:3306 at /4 (sid:1502919141556, cid:5)
cdcserver_1  | 21:32:26.634 [Curator-ConnectionStateManager-0]       DEBUG c.n.e.c.m.EventTableChangesToAggregateTopicTranslator - Started CDC Kafka publisher
cdcserver_1  | 21:32:26.635 [Curator-ConnectionStateManager-0]       DEBUG c.n.e.c.m.EventTableChangesToAggregateTopicTranslator - TakeLeadership returning
cdcserver_1  | 21:32:38.174 [main-SendThread(zookeeper:2181)]       DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x15dec9850990002 after 1ms

Everything works great until I run the rest-query service, which gives me that error. If I remove the line com.networknt.eventuate.client.EventuateClientStartupHookProvider from the StartupHookProvider config, the service starts, but when I run this command

curl -X GET http://localhost:8082/v1/todos

it returns empty, but I think that's because the rest-query service hasn't subscribed to the events.

stevehu commented 7 years ago

You shouldn't remove the StartupHookProvider. What error message do you get if you leave it there? Also, if you haven't created any todo items, the query result is empty by default.

In addition, have you created the todo database in MySQL? Thanks.

notesby commented 7 years ago

Yes, I have created the database. The error I got is the first one I posted above.

(Four screenshots attached, taken 2017-08-17.)

notesby commented 7 years ago

I have been reading the code, and it seems that Kafka is not reading bootstrap.servers from the configuration:

if (addresses.isEmpty())
    throw new ConfigException("No resolvable bootstrap urls given in " + CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG);
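To illustrate why a missing config file produces exactly this exception, here is a simplified, self-contained sketch (not the actual Kafka client source) of how bootstrap.servers is parsed: each comma-separated host:port entry is collected, and if the property is blank — as it would be when kafka.yml is never loaded — the resulting list is empty and construction fails with the "No resolvable bootstrap urls" error.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified sketch of the bootstrap.servers validation path.
public class BootstrapCheck {
    static List<String> parseAndValidate(String bootstrapServers) {
        List<String> addresses = new ArrayList<>();
        for (String url : bootstrapServers.split(",")) {
            String trimmed = url.trim();
            // keep only non-blank entries that look like host:port
            if (!trimmed.isEmpty() && trimmed.contains(":")) {
                addresses.add(trimmed);
            }
        }
        // a blank or unloaded property yields an empty list -> fail fast
        if (addresses.isEmpty()) {
            throw new IllegalArgumentException(
                "No resolvable bootstrap urls given in bootstrap.servers");
        }
        return addresses;
    }

    public static void main(String[] args) {
        System.out.println(parseAndValidate("kafka:9092")); // prints [kafka:9092]
        try {
            parseAndValidate(""); // missing config -> exception
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```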
notesby commented 7 years ago

Okay, I solved it. The problem was that the file kafka.yml was missing from:

light-example-4j/eventuate/todo-list/rest-query/src/main/resources/config
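For reference, a minimal kafka.yml along these lines restores bootstrap.servers; the exact keys depend on the framework's KafkaConfig class, and the broker address is an assumption matching the docker-compose setup in the log above.

```yaml
# Hypothetical minimal kafka.yml -- key names and broker address are
# assumptions; adjust to match the framework's KafkaConfig and your environment.
bootstrapServers: kafka:9092
```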

Thank you 😄

stevehu commented 7 years ago

@notesby Thanks a lot for opening this issue. Based on the information you provided, we have updated the tutorial to add the step of creating kafka.yml and updating the StartupHookProvider to subscribe to the events in rest-query.

Your detailed issue report greatly helped us identify the root cause. If you encounter any other issues, please let us know. Together we can improve this tutorial and make it easier for most developers to follow.