prakashid2 closed this issue 4 years ago.
What is the output of `docker ps -a`?
D:\AllWorkspaces\manningWorkspace\ftgo-application-master>docker ps -a
CONTAINER ID   IMAGE                                                COMMAND                 CREATED        STATUS                     PORTS                                        NAMES
d6a0622db806   ftgo-application-master_ftgo-order-history-service   "/bin/sh -c 'java ${"   3 hours ago    Exited (1) 2 hours ago                                                  ftgo-application-master_ftgo-order-history-service_1
fb26b4c2c593   ftgo-application-master_ftgo-kitchen-service         "/bin/sh -c 'java ${"   3 hours ago    Exited (1) 2 hours ago                                                  ftgo-application-master_ftgo-kitchen-service_1
765c0049d14f   ftgo-application-master_ftgo-consumer-service        "/bin/sh -c 'java ${"   3 hours ago    Exited (1) 2 hours ago                                                  ftgo-application-master_ftgo-consumer-service_1
39b4be440915   ftgo-application-master_ftgo-order-service           "/bin/sh -c 'java ${"   3 hours ago    Exited (1) 2 hours ago                                                  ftgo-application-master_ftgo-order-service_1
e11b2a5f1bf1   ftgo-application-master_ftgo-accounting-service      "/bin/sh -c 'java ${"   3 hours ago    Exited (1) 2 hours ago                                                  ftgo-application-master_ftgo-accounting-service_1
14278276b42e   ftgo-application-master_ftgo-restaurant-service      "/bin/sh -c 'java ${"   3 hours ago    Exited (137) 2 hours ago                                                ftgo-application-master_ftgo-restaurant-service_1
32e478759f46   eventuateio/eventuate-cdc-service:0.4.0.RELEASE      "/bin/sh -c 'java ${"   3 hours ago    Exited (1) 2 hours ago                                                  ftgo-application-master_cdc-service_1
47110c01a6a7   eventuateio/eventuate-kafka:0.3.0.RELEASE            "/bin/bash -c ./run-"   3 hours ago    Exited (137) 2 hours ago                                                ftgo-application-master_kafka_1
7b6d9f6411f0   ftgo-application-master_dynamodblocal-init           "/bin/sh -c './wait-"   3 hours ago    Up 3 hours (healthy)                                                    ftgo-application-master_dynamodblocal-init_1
37f828ac67e0   openzipkin/zipkin:2.5.0                              "/bin/sh -c 'test -n"   3 hours ago    Up 3 hours                 9410/tcp, 0.0.0.0:9411->9411/tcp             ftgo-application-master_zipkin_1
fcd514af0edd   eventuateio/eventuate-zookeeper:0.4.0.RELEASE        "/usr/local/zookeepe"   3 hours ago    Up 3 hours                 2888/tcp, 0.0.0.0:2181->2181/tcp, 3888/tcp   ftgo-application-master_zookeeper_1
4f61d23cafb7   ftgo-application-master_mysql                        "docker-entrypoint.s"   3 hours ago    Up 3 hours                 0.0.0.0:3306->3306/tcp                       ftgo-application-master_mysql_1
cdb12f2404fd   ftgo-application-master_dynamodblocal                "/bin/sh -c 'java -j"   3 hours ago    Up 3 hours (healthy)       0.0.0.0:8000->8000/tcp                       ftgo-application-master_dynamodblocal_1
dc91e5a7b938   ftgo-application-master_ftgo-api-gateway             "/bin/sh -c 'java ${"   3 hours ago    Up 3 hours (healthy)       0.0.0.0:8087->8080/tcp                       ftgo-application-master_ftgo-api-gateway_1
f6d7fd1eea2c   openzipkin/zipkin                                    "/busybox/sh run.sh"    11 days ago    Created                                                                 kind_babbage
2a1513a0c6e3   openzipkin/zipkin                                    "/busybox/sh run.sh"    11 days ago    Exited (255) 4 hours ago   9410/tcp, 0.0.0.0:9411->9411/tcp             kind_hellman
05e99c400e01   prakashmum/security-simple:v1                        "/bin/sh -c 'java -j"   2 weeks ago    Exited (255) 11 days ago   0.0.0.0:8086->8086/tcp                       hopeful_merkle
2a7b4ec1b962   docker-spring-boot                                   "java jar docker-spr"   4 months ago   Exited (1) 4 months ago                                                 dazzling_elgamal
fcaedbbe0531   docker-spring-boot                                   "java jar docker-spr"   4 months ago   Exited (1) 4 months ago                                                 ecstatic_gagarin
D:\AllWorkspaces\manningWorkspace\ftgo-application-master>
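As a side note on reading that listing: `Exited (1)` is an application-level failure, while `Exited (137)` is 128 + signal 9, i.e. the process was killed with SIGKILL (for example by `docker stop`'s timeout or, on a memory-constrained Docker Toolbox VM, the OOM killer). A small sketch of the decoding (my own helper, not part of the project):

```python
import signal

def decode_exit(code: int) -> str:
    """Interpret a container exit code the way the shell does."""
    if code > 128:
        # 128 + n means the process died from signal n
        sig = signal.Signals(code - 128).name
        return f"killed by signal {code - 128} ({sig})"
    return f"exited with status {code}"

print(decode_exit(137))  # killed by signal 9 (SIGKILL)
print(decode_exit(1))    # exited with status 1
```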
It looks like ftgo-application-master_kafka_1 and everything that depends upon it has exited.
What's the output of `docker logs ftgo-application-master_kafka_1`?
D:\AllWorkspaces\manningWorkspace\ftgo-application-master>docker logs ftgo-application-master_kafka_1
ZOOKEEPER_CONNECTION_TIMEOUT_MS is not set. Setting to 6000
ADVERTISED_HOST_NAME=192.168.99.100
/usr/local/kafka-config/server.properties -> ./config/server.properties
[2020-04-15 13:10:23,385] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2020-04-15 13:10:38,647] INFO starting (kafka.server.KafkaServer)
[2020-04-15 13:10:38,661] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer)
[2020-04-15 13:10:38,936] INFO [ZooKeeperClient] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient)
[2020-04-15 13:10:39,027] INFO Client environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT (org.apache.zookeeper.ZooKeeper)
[2020-04-15 13:10:39,029] INFO Client environment:host.name=47110c01a6a7 (org.apache.zookeeper.ZooKeeper)
[2020-04-15 13:10:39,030] INFO Client environment:java.version=1.8.0_91 (org.apache.zookeeper.ZooKeeper)
[2020-04-15 13:10:39,031] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper)
[2020-04-15 13:10:39,031] INFO Client environment:java.home=/usr/lib/jvm/java-8-openjdk-amd64/jre (org.apache.zookeeper.ZooKeeper)
[2020-04-15 13:10:39,031] INFO Client environment:java.class.path=/usr/local/kafka_2.12-1.1.0/bin/../libs/aopalliance-repackaged-2.5.0-b32.jar:/usr/lo
cal/kafka_2.12-1.1.0/bin/../libs/argparse4j-0.7.0.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/commons-lang3-3.5.jar:/usr/local/kafka_2.12-1.1.0/bin/..
/libs/connect-api-1.1.0.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/connect-file-1.1.0.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/connect-json-1.1.0.
jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/connect-runtime-1.1.0.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/connect-transforms-1.1.0.jar:/usr/local/
kafka_2.12-1.1.0/bin/../libs/guava-20.0.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/hk2-api-2.5.0-b32.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/hk2-
locator-2.5.0-b32.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/hk2-utils-2.5.0-b32.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/jackson-annotations-2.9.
4.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/jackson-core-2.9.4.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/jackson-databind-2.9.4.jar:/usr/local/kaf
ka_2.12-1.1.0/bin/../libs/jackson-jaxrs-base-2.9.4.jar:/usr/local/kafka2.12-1.1.0/bin/../libs/jackson-jaxrs-json-provider-2.9.4.jar:/usr/local/kafka
2.12-1.1.0/bin/../libs/jackson-module-jaxb-annotations-2.9.4.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/javassist-3.20.0-GA.jar:/usr/local/kafka_2.12
-1.1.0/bin/../libs/javassist-3.21.0-GA.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/javax.annotation-api-1.2.jar:/usr/local/kafka_2.12-1.1.0/bin/../lib
s/javax.inject-1.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/javax.inject-2.5.0-b32.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/javax.servlet-api-3.1.
0.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/javax.ws.rs-api-2.0.1.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/jersey-client-2.25.1.jar:/usr/local/ka
fka_2.12-1.1.0/bin/../libs/jersey-common-2.25.1.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/jersey-container-servlet-2.25.1.jar:/usr/local/kafka_2.12-
1.1.0/bin/../libs/jersey-container-servlet-core-2.25.1.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/jersey-guava-2.25.1.jar:/usr/local/kafka_2.12-1.1.0
/bin/../libs/jersey-media-jaxb-2.25.1.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/jersey-server-2.25.1.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/jet
ty-client-9.2.24.v20180105.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/jetty-continuation-9.2.24.v20180105.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs
/jetty-http-9.2.24.v20180105.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/jetty-io-9.2.24.v20180105.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/jetty-s
ecurity-9.2.24.v20180105.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/jetty-server-9.2.24.v20180105.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/jetty-s
ervlet-9.2.24.v20180105.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/jetty-servlets-9.2.24.v20180105.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/jetty-
util-9.2.24.v20180105.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/jopt-simple-5.0.4.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/kafka-clients-1.1.0.ja
r:/usr/local/kafka_2.12-1.1.0/bin/../libs/kafka-log4j-appender-1.1.0.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/kafka-streams-1.1.0.jar:/usr/local/ka
fka_2.12-1.1.0/bin/../libs/kafka-streams-examples-1.1.0.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/kafka-streams-test-utils-1.1.0.jar:/usr/local/kafk
a_2.12-1.1.0/bin/../libs/kafka-tools-1.1.0.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/kafka_2.12-1.1.0-sources.jar:/usr/local/kafka_2.12-1.1.0/bin/..
/libs/kafka_2.12-1.1.0-test-sources.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/kafka_2.12-1.1.0.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/log4j-1.2
.17.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/lz4-java-1.4.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/maven-artifact-3.5.2.jar:/usr/local/kafka_2.1
2-1.1.0/bin/../libs/metrics-core-2.2.0.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/osgi-resource-locator-1.0.1.jar:/usr/local/kafka_2.12-1.1.0/bin/../
libs/plexus-utils-3.1.0.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/reflections-0.9.11.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/rocksdbjni-5.7.3.ja
r:/usr/local/kafka_2.12-1.1.0/bin/../libs/scala-library-2.12.4.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/scala-logging_2.12-3.7.2.jar:/usr/local/kaf
ka_2.12-1.1.0/bin/../libs/scala-reflect-2.12.4.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/slf4j-api-1.7.25.jar:/usr/local/kafka_2.12-1.1.0/bin/../lib
s/slf4j-log4j12-1.7.25.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/snappy-java-1.1.7.1.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/validation-api-1.1.
0.Final.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/zkclient-0.10.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/zookeeper-3.4.10.jar (org.apache.zookeep
er.ZooKeeper)
[2020-04-15 13:10:39,037] INFO Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
[2020-04-15 13:10:39,038] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
[2020-04-15 13:10:39,039] INFO Client environment:java.compiler=
[2020-04-15 13:10:57,031] INFO Log directory '/tmp/kafka-logs' not found, creating it. (kafka.log.LogManager)
[2020-04-15 13:10:57,426] INFO Loading logs. (kafka.log.LogManager)
[2020-04-15 13:10:57,762] INFO Logs loading complete in 326 ms. (kafka.log.LogManager)
[2020-04-15 13:10:58,096] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
[2020-04-15 13:10:58,607] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
[2020-04-15 13:11:19,282] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
[2020-04-15 13:11:21,171] INFO [SocketServer brokerId=0] Started 1 acceptor threads (kafka.network.SocketServer)
[2020-04-15 13:11:23,702] INFO Creating /brokers/ids/0 (is it secure? false) (kafka.zk.KafkaZkClient)
[2020-04-15 13:11:23,858] INFO Result of znode creation at /brokers/ids/0 is: OK (kafka.zk.KafkaZkClient)
[2020-04-15 13:11:23,895] INFO Registered broker 0 at path /brokers/ids/0 with addresses: ArrayBuffer(EndPoint(192.168.99.100,9092,ListenerName(PLAINTEXT),PLAINTEXT)) (kafka.zk.KafkaZkClient)
[2020-04-15 13:11:23,955] WARN No meta.properties file under dir /tmp/kafka-logs/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2020-04-15 13:11:25,432] INFO Creating /controller (is it secure? false) (kafka.zk.KafkaZkClient)
[2020-04-15 13:11:25,534] INFO Result of znode creation at /controller is: OK (kafka.zk.KafkaZkClient)
[2020-04-15 13:11:26,155] INFO [GroupCoordinator 0]: Starting up. (kafka.coordinator.group.GroupCoordinator)
[2020-04-15 13:11:26,193] INFO [GroupCoordinator 0]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
[2020-04-15 13:11:26,417] INFO [GroupMetadataManager brokerId=0] Removed 0 expired offsets in 220 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-04-15 13:11:26,755] INFO [ProducerId Manager 0]: Acquired new producerId block (brokerId:0,blockStartProducerId:0,blockEndProducerId:999) by writing to Zk with path version 1 (kafka.coordinator.transaction.ProducerIdManager)
[2020-04-15 13:11:28,118] INFO [TransactionCoordinator id=0] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
[2020-04-15 13:11:28,242] INFO [TransactionCoordinator id=0] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
[2020-04-15 13:11:33,017] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[2020-04-15 13:11:33,789] INFO Kafka version : 1.1.0 (org.apache.kafka.common.utils.AppInfoParser)
[2020-04-15 13:11:33,870] INFO Kafka commitId : fdcf75ea326b8e07 (org.apache.kafka.common.utils.AppInfoParser)
[2020-04-15 13:11:33,913] INFO [KafkaServer id=0] started (kafka.server.KafkaServer)
ZOOKEEPER_CONNECTION_TIMEOUT_MS is not set. Setting to 6000
ADVERTISED_HOST_NAME=192.168.99.100
/usr/local/kafka-config/server.properties -> ./config/server.properties
[2020-04-15 13:58:16,223] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2020-04-15 13:58:26,097] INFO starting (kafka.server.KafkaServer)
[2020-04-15 13:58:26,110] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer)
[2020-04-15 13:58:26,387] INFO [ZooKeeperClient] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient)
[2020-04-15 13:58:26,448] INFO Client environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT (org.apache.zookeeper.ZooKeeper)
[2020-04-15 13:58:26,448] INFO Client environment:host.name=47110c01a6a7 (org.apache.zookeeper.ZooKeeper)
[2020-04-15 13:58:26,450] INFO Client environment:java.version=1.8.0_91 (org.apache.zookeeper.ZooKeeper)
[2020-04-15 13:58:26,450] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper)
[2020-04-15 13:58:26,451] INFO Client environment:java.home=/usr/lib/jvm/java-8-openjdk-amd64/jre (org.apache.zookeeper.ZooKeeper)
[2020-04-15 13:58:26,452] INFO Client environment:java.class.path=/usr/local/kafka_2.12-1.1.0/bin/../libs/aopalliance-repackaged-2.5.0-b32.jar:/usr/lo
cal/kafka_2.12-1.1.0/bin/../libs/argparse4j-0.7.0.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/commons-lang3-3.5.jar:/usr/local/kafka_2.12-1.1.0/bin/..
/libs/connect-api-1.1.0.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/connect-file-1.1.0.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/connect-json-1.1.0.
jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/connect-runtime-1.1.0.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/connect-transforms-1.1.0.jar:/usr/local/
kafka_2.12-1.1.0/bin/../libs/guava-20.0.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/hk2-api-2.5.0-b32.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/hk2-
locator-2.5.0-b32.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/hk2-utils-2.5.0-b32.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/jackson-annotations-2.9.
4.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/jackson-core-2.9.4.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/jackson-databind-2.9.4.jar:/usr/local/kaf
ka_2.12-1.1.0/bin/../libs/jackson-jaxrs-base-2.9.4.jar:/usr/local/kafka2.12-1.1.0/bin/../libs/jackson-jaxrs-json-provider-2.9.4.jar:/usr/local/kafka
2.12-1.1.0/bin/../libs/jackson-module-jaxb-annotations-2.9.4.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/javassist-3.20.0-GA.jar:/usr/local/kafka_2.12
-1.1.0/bin/../libs/javassist-3.21.0-GA.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/javax.annotation-api-1.2.jar:/usr/local/kafka_2.12-1.1.0/bin/../lib
s/javax.inject-1.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/javax.inject-2.5.0-b32.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/javax.servlet-api-3.1.
0.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/javax.ws.rs-api-2.0.1.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/jersey-client-2.25.1.jar:/usr/local/ka
fka_2.12-1.1.0/bin/../libs/jersey-common-2.25.1.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/jersey-container-servlet-2.25.1.jar:/usr/local/kafka_2.12-
1.1.0/bin/../libs/jersey-container-servlet-core-2.25.1.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/jersey-guava-2.25.1.jar:/usr/local/kafka_2.12-1.1.0
/bin/../libs/jersey-media-jaxb-2.25.1.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/jersey-server-2.25.1.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/jet
ty-client-9.2.24.v20180105.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/jetty-continuation-9.2.24.v20180105.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs
/jetty-http-9.2.24.v20180105.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/jetty-io-9.2.24.v20180105.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/jetty-s
ecurity-9.2.24.v20180105.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/jetty-server-9.2.24.v20180105.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/jetty-s
ervlet-9.2.24.v20180105.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/jetty-servlets-9.2.24.v20180105.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/jetty-
util-9.2.24.v20180105.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/jopt-simple-5.0.4.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/kafka-clients-1.1.0.ja
r:/usr/local/kafka_2.12-1.1.0/bin/../libs/kafka-log4j-appender-1.1.0.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/kafka-streams-1.1.0.jar:/usr/local/ka
fka_2.12-1.1.0/bin/../libs/kafka-streams-examples-1.1.0.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/kafka-streams-test-utils-1.1.0.jar:/usr/local/kafk
a_2.12-1.1.0/bin/../libs/kafka-tools-1.1.0.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/kafka_2.12-1.1.0-sources.jar:/usr/local/kafka_2.12-1.1.0/bin/..
/libs/kafka_2.12-1.1.0-test-sources.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/kafka_2.12-1.1.0.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/log4j-1.2
.17.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/lz4-java-1.4.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/maven-artifact-3.5.2.jar:/usr/local/kafka_2.1
2-1.1.0/bin/../libs/metrics-core-2.2.0.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/osgi-resource-locator-1.0.1.jar:/usr/local/kafka_2.12-1.1.0/bin/../
libs/plexus-utils-3.1.0.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/reflections-0.9.11.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/rocksdbjni-5.7.3.ja
r:/usr/local/kafka_2.12-1.1.0/bin/../libs/scala-library-2.12.4.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/scala-logging_2.12-3.7.2.jar:/usr/local/kaf
ka_2.12-1.1.0/bin/../libs/scala-reflect-2.12.4.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/slf4j-api-1.7.25.jar:/usr/local/kafka_2.12-1.1.0/bin/../lib
s/slf4j-log4j12-1.7.25.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/snappy-java-1.1.7.1.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/validation-api-1.1.
0.Final.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/zkclient-0.10.jar:/usr/local/kafka_2.12-1.1.0/bin/../libs/zookeeper-3.4.10.jar (org.apache.zookeep
er.ZooKeeper)
[2020-04-15 13:58:26,454] INFO Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
[2020-04-15 13:58:26,457] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
[2020-04-15 13:58:26,458] INFO Client environment:java.compiler=
[2020-04-15 13:58:33,881] INFO Loading logs. (kafka.log.LogManager)
[2020-04-15 13:58:34,038] INFO Logs loading complete in 147 ms. (kafka.log.LogManager)
[2020-04-15 13:58:34,158] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
[2020-04-15 13:58:34,196] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
[2020-04-15 13:58:42,787] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
[2020-04-15 13:58:43,970] INFO [SocketServer brokerId=0] Started 1 acceptor threads (kafka.network.SocketServer)
[2020-04-15 13:58:46,954] INFO Creating /brokers/ids/0 (is it secure? false) (kafka.zk.KafkaZkClient)
[2020-04-15 13:58:47,029] INFO Result of znode creation at /brokers/ids/0 is: OK (kafka.zk.KafkaZkClient)
[2020-04-15 13:58:47,062] INFO Registered broker 0 at path /brokers/ids/0 with addresses: ArrayBuffer(EndPoint(192.168.99.100,9092,ListenerName(PLAINTEXT),PLAINTEXT)) (kafka.zk.KafkaZkClient)
[2020-04-15 13:58:48,172] INFO Creating /controller (is it secure? false) (kafka.zk.KafkaZkClient)
[2020-04-15 13:58:48,207] INFO Result of znode creation at /controller is: OK (kafka.zk.KafkaZkClient)
[2020-04-15 13:58:48,812] INFO [GroupCoordinator 0]: Starting up. (kafka.coordinator.group.GroupCoordinator)
[2020-04-15 13:58:48,831] INFO [GroupCoordinator 0]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
[2020-04-15 13:58:49,015] INFO [GroupMetadataManager brokerId=0] Removed 0 expired offsets in 184 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-04-15 13:58:49,282] INFO [ProducerId Manager 0]: Acquired new producerId block (brokerId:0,blockStartProducerId:1000,blockEndProducerId:1999) by writing to Zk with path version 2 (kafka.coordinator.transaction.ProducerIdManager)
[2020-04-15 13:58:50,229] INFO [TransactionCoordinator id=0] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
[2020-04-15 13:58:50,289] INFO [TransactionCoordinator id=0] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
[2020-04-15 13:58:54,626] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[2020-04-15 13:58:54,963] INFO Kafka version : 1.1.0 (org.apache.kafka.common.utils.AppInfoParser)
[2020-04-15 13:58:54,969] INFO Kafka commitId : fdcf75ea326b8e07 (org.apache.kafka.common.utils.AppInfoParser)
[2020-04-15 13:58:55,015] INFO [KafkaServer id=0] started (kafka.server.KafkaServer)
D:\AllWorkspaces\manningWorkspace\ftgo-application-master>
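Both runs in that log end with `[KafkaServer id=0] started`, so the broker itself came up cleanly each time; the `Exited (137)` status suggests it was later killed rather than crashing on its own. One thing worth checking on Docker Toolbox is whether the advertised listener (`ADVERTISED_HOST_NAME=192.168.99.100`) is even reachable from wherever the clients run; a throwaway probe (my own helper, not part of the project):

```python
import socket

def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# From the Docker Toolbox host, the advertised Kafka listener would be probed as:
# is_reachable("192.168.99.100", 9092)
```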
Strange. No obvious error.
What's the output of `docker logs ftgo-application-master_ftgo-order-service`?
Did you do anything to your machine 3 hours ago?
About 3-4 hours earlier I ran the commands mentioned in README.adoc because I wanted to see the application running. Here is the output you requested.
D:\AllWorkspaces\manningWorkspace\ftgo-application-master>docker logs ftgo-application-master_ftgo-order-service
Error: No such container: ftgo-application-master_ftgo-order-service
D:\AllWorkspaces\manningWorkspace\ftgo-application-master>
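The name was close, just missing Compose's instance suffix: containers are named `<project>_<service>_<index>`. A typo like this is easy to catch by fuzzy-matching against the NAMES column of `docker ps -a`; a throwaway sketch, with names copied from the listing above:

```python
import difflib

# NAMES column from the docker ps -a listing above (abbreviated)
container_names = [
    "ftgo-application-master_ftgo-order-service_1",
    "ftgo-application-master_ftgo-kitchen-service_1",
    "ftgo-application-master_kafka_1",
    "ftgo-application-master_mysql_1",
]

typo = "ftgo-application-master_ftgo-order-service"  # missing the _1 suffix
print(difflib.get_close_matches(typo, container_names, n=1))
# ['ftgo-application-master_ftgo-order-service_1']
```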
Oops, I mistyped it. It should be `docker logs ftgo-application-master_ftgo-order-service_1`.
D:\AllWorkspaces\manningWorkspace\ftgo-application-master>docker logs ftgo-application-master_ftgo-order-service_1
2020-04-15 13:11:32.390 INFO [-,,,] 1 --- [ main] s.c.a.AnnotationConfigApplicationContext : Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@7a92922: startup date [Wed Apr 15 13:11:32 GMT 2020]; root of context hierarchy
2020-04-15 13:11:55.018 INFO [-,,,] 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'configurationPropertiesRebinderAutoConfiguration' of type [org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration$$EnhancerBySpringCGLIB$$10aebd90] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
[Spring Boot ASCII art banner] :: Spring Boot :: (v2.0.3.RELEASE)
2020-04-15 13:12:32.581 INFO [ftgo-order-service,,,] 1 --- [ main] n.c.f.o.main.OrderServiceMain : No active profile set, falling back to default profiles: default
2020-04-15 13:12:35.443 INFO [ftgo-order-service,,,] 1 --- [ main] ConfigServletWebServerApplicationContext : Refreshing org.springframework.boot.web.servlet.context.AnnotationConfigServletWebServerApplicationContext@4f7d0008: startup date [Wed Apr 15 13:12:35 GMT 2020]; parent: org.springframework.context.annotation.AnnotationConfigApplicationContext@7a92922
2020-04-15 13:13:39.046 INFO [ftgo-order-service,,,] 1 --- [ main] o.s.b.f.s.DefaultListableBeanFactory : Overriding bean definition for bean 'sagaCommandProducer' with a different definition: replacing [Root bean: class [null]; scope=; abstract=false; lazyInit=false; autowireMode=3; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=io.eventuate.tram.sagas.orchestration.SagaOrchestratorConfiguration; factoryMethodName=sagaCommandProducer; initMethodName=null; destroyMethodName=(inferred); defined in io.eventuate.tram.sagas.orchestration.SagaOrchestratorConfiguration] with [Root bean: class [null]; scope=; abstract=false; lazyInit=false; autowireMode=3; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=net.chrisrichardson.ftgo.orderservice.domain.OrderServiceConfiguration; factoryMethodName=sagaCommandProducer; initMethodName=null; destroyMethodName=(inferred); defined in net.chrisrichardson.ftgo.orderservice.domain.OrderServiceConfiguration]
2020-04-15 13:14:51.953 INFO [ftgo-order-service,,,] 1 --- [ main] o.s.b.f.s.DefaultListableBeanFactory : Overriding bean definition for bean 'dataSource' with a different definition: replacing [Root bean: class [null]; scope=refresh; abstract=false; lazyInit=false; autowireMode=3; dependencyCheck=0; autowireCandidate=false; primary=false; factoryBeanName=org.springframework.boot.autoconfigure.jdbc.DataSourceConfiguration$Hikari; factoryMethodName=dataSource; initMethodName=null; destroyMethodName=(inferred); defined in class path resource [org/springframework/boot/autoconfigure/jdbc/DataSourceConfiguration$Hikari.class]] with [Root bean: class [org.springframework.aop.scope.ScopedProxyFactoryBean]; scope=; abstract=false; lazyInit=false; autowireMode=0; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=null; factoryMethodName=null; initMethodName=null; destroyMethodName=null; defined in BeanDefinition defined in class path resource [org/springframework/boot/autoconfigure/jdbc/DataSourceConfiguration$Hikari.class]]
2020-04-15 13:15:22.512 INFO [ftgo-order-service,,,] 1 --- [ main] o.s.cloud.context.scope.GenericScope : BeanFactory id=152ca69f-2431-33f8-8a00-97525ce0a50b
2020-04-15 13:16:46.505 INFO [ftgo-order-service,,,] 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.transaction.annotation.ProxyTransactionManagementConfiguration' of type [org.springframework.transaction.annotation.ProxyTransactionManagementConfiguration$$EnhancerBySpringCGLIB$$f494ba93] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2020-04-15 13:16:55.102 INFO [ftgo-order-service,,,] 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration' of type [org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration$$EnhancerBySpringCGLIB$$10aebd90] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2020-04-15 13:17:27.197 INFO [ftgo-order-service,,,] 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8080 (http)
2020-04-15 13:17:29.325 INFO [ftgo-order-service,,,] 1 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
2020-04-15 13:17:29.330 INFO [ftgo-order-service,,,] 1 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet Engine: Apache Tomcat/8.5.31
2020-04-15 13:17:29.968 INFO [ftgo-order-service,,,] 1 --- [ost-startStop-1] o.a.catalina.core.AprLifecycleListener : The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: [/usr/lib/jvm/java-1.8-openjdk/jre/lib/amd64/server:/usr/lib/jvm/java-1.8-openjdk/jre/lib/amd64:/usr/lib/jvm/java-1.8-openjdk/jre/../lib/amd64:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib]
2020-04-15 13:17:37.160 INFO [ftgo-order-service,,,] 1 --- [ost-startStop-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2020-04-15 13:17:37.185 INFO [ftgo-order-service,,,] 1 --- [ost-startStop-1] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 301749 ms
2020-04-15 13:19:56.329 INFO [ftgo-order-service,,,] 1 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'characterEncodingFilter' to: [/*]
2020-04-15 13:19:56.465 INFO [ftgo-order-service,,,] 1 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'tracingFilter' to: [/*]
2020-04-15 13:19:56.473 INFO [ftgo-order-service,,,] 1 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'exceptionLoggingFilter' to: [/*]
2020-04-15 13:19:56.477 INFO [ftgo-order-service,,,] 1 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'traceIdResponseFilter' to: [/*]
2020-04-15 13:19:56.481 INFO [ftgo-order-service,,,] 1 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'hiddenHttpMethodFilter' to: [/*]
2020-04-15 13:19:56.483 INFO [ftgo-order-service,,,] 1 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'httpPutFormContentFilter' to: [/*]
2020-04-15 13:19:56.488 INFO [ftgo-order-service,,,] 1 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'requestContextFilter' to: [/*]
2020-04-15 13:19:56.488 INFO [ftgo-order-service,,,] 1 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'httpTraceFilter' to: [/*]
2020-04-15 13:19:56.492 INFO [ftgo-order-service,,,] 1 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'webMvcMetricsFilter' to: [/*]
2020-04-15 13:19:56.495 INFO [ftgo-order-service,,,] 1 --- [ost-startStop-1] o.s.b.w.servlet.ServletRegistrationBean : Servlet dispatcherServlet mapped to [/]
2020-04-15 13:19:58.189 DEBUG [ftgo-order-service,,,] 1 --- [ost-startStop-1] n.c.f.o.web.TraceIdResponseFilter : Initializing filter 'traceIdResponseFilter'
2020-04-15 13:19:58.192 DEBUG [ftgo-order-service,,,] 1 --- [ost-startStop-1] n.c.f.o.web.TraceIdResponseFilter : Filter 'traceIdResponseFilter' configured successfully
2020-04-15 13:20:04.848 INFO [ftgo-order-service,,,] 1 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Starting...
Wed Apr 15 13:20:12 GMT 2020 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
2020-04-15 13:20:17.913 INFO [ftgo-order-service,,,] 1 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Start completed.
[The MySQL SSL warning above is repeated nine more times, once per pooled connection.]
2020-04-15 13:20:25.804 INFO [ftgo-order-service,,,] 1 --- [ main] j.LocalContainerEntityManagerFactoryBean : Building JPA container EntityManagerFactory for persistence unit 'default'
2020-04-15 13:20:26.485 INFO [ftgo-order-service,,,] 1 --- [ main] o.hibernate.jpa.internal.util.LogHelper : HHH000204: Processing PersistenceUnitInfo [
	name: default
	...]
2020-04-15 13:20:33.674 INFO [ftgo-order-service,,,] 1 --- [ main] org.hibernate.Version : HHH000412: Hibernate Core {5.2.17.Final}
2020-04-15 13:20:33.722 INFO [ftgo-order-service,,,] 1 --- [ main] org.hibernate.cfg.Environment : HHH000206: hibernate.properties not found
2020-04-15 13:20:37.263 INFO [ftgo-order-service,,,] 1 --- [ main] o.hibernate.annotations.common.Version : HCANN000001: Hibernate Commons Annotations {5.0.1.Final}
2020-04-15 13:21:08.220 INFO [ftgo-order-service,,,] 1 --- [ main] org.hibernate.dialect.Dialect : HHH000400: Using dialect: org.hibernate.dialect.MySQL5Dialect
2020-04-15 13:21:15.075 WARN [ftgo-order-service,,,] 1 --- [ main] o.h.c.a.r.JPAOverriddenAnnotationReader : HHH000207: Property net.chrisrichardson.ftgo.common.Money.amount not found in class but described in <mapping-file/> (possible typo error)
2020-04-15 13:22:05.676 WARN [ftgo-order-service,,,] 1 --- [ main] org.apache.kafka.clients.ClientUtils : Removing server kafka:9092 from bootstrap.servers as DNS resolution failed for kafka
2020-04-15 13:22:05.861 ERROR [ftgo-order-service,,,] 1 --- [ main] i.e.m.k.b.c.EventuateKafkaConsumer : Error subscribing
org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
	at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(...)
2020-04-15 13:22:05.985 WARN [ftgo-order-service,,,] 1 --- [ main] ConfigServletWebServerApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'orderController' defined in URL [jar:file:/ftgo-order-service.jar!/BOOT-INF/classes!/net/chrisrichardson/ftgo/orderservice/web/OrderController.class]: Unsatisfied dependency expressed through constructor parameter 0; nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'orderService' defined in net.chrisrichardson.ftgo.orderservice.domain.OrderServiceConfiguration: Unsatisfied dependency expressed through method 'orderService' parameter 3; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'createOrderSagaManager': Invocation of init method failed; nested exception is java.lang.RuntimeException: org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
2020-04-15 13:22:06.130 INFO [ftgo-order-service,,,] 1 --- [ main] j.LocalContainerEntityManagerFactoryBean : Closing JPA EntityManagerFactory for persistence unit 'default'
2020-04-15 13:22:07.707 WARN [ftgo-order-service,,,] 1 --- [ main] z.r.AsyncReporter$BoundedAsyncReporter : Timed out waiting for in-flight spans to send
2020-04-15 13:22:07.817 INFO [ftgo-order-service,,,] 1 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Shutdown initiated...
2020-04-15 13:22:08.472 INFO [ftgo-order-service,,,] 1 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Shutdown completed.
2020-04-15 13:22:08.509 INFO [ftgo-order-service,,,] 1 --- [ main] o.apache.catalina.core.StandardService : Stopping service [Tomcat]
2020-04-15 13:22:09.086 WARN [ftgo-order-service,,,] 1 --- [ost-startStop-2] o.a.c.loader.WebappClassLoaderBase : The web application [ROOT] appears to have started a thread named [AsyncReporter{org.springframework.cloud.sleuth.zipkin2.sender.RestTemplateSender@4f33e066}] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 sun.net.www.protocol.http.HttpURLConnection$7.run(HttpURLConnection.java:1142)
 sun.net.www.protocol.http.HttpURLConnection$7.run(HttpURLConnection.java:1140)
 java.security.AccessController.doPrivileged(Native Method)
 sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1139)
 sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1050)
 sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:984)
 org.springframework.http.client.SimpleBufferingClientHttpRequest.executeInternal(SimpleBufferingClientHttpRequest.java:76)
 org.springframework.http.client.AbstractBufferingClientHttpRequest.executeInternal(AbstractBufferingClientHttpRequest.java:48)
 org.springframework.http.client.AbstractClientHttpRequest.execute(AbstractClientHttpRequest.java:53)
 org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:723)
 org.springframework.cloud.sleuth.zipkin2.sender.ZipkinRestTemplateWrapper.doExecute(ZipkinRestTemplateSenderConfiguration.java:132)
 org.springframework.web.client.RestTemplate.exchange(RestTemplate.java:658)
 org.springframework.cloud.sleuth.zipkin2.sender.RestTemplateSender.post(RestTemplateSender.java:112)
 org.springframework.cloud.sleuth.zipkin2.sender.RestTemplateSender$HttpPostCall.doExecute(RestTemplateSender.java:123)
 org.springframework.cloud.sleuth.zipkin2.sender.RestTemplateSender$HttpPostCall.doExecute(RestTemplateSender.java:115)
 zipkin2.Call$Base.execute(Call.java:379)
 zipkin2.reporter.AsyncReporter$BoundedAsyncReporter.flush(AsyncReporter.java:286)
 zipkin2.reporter.AsyncReporter$Builder$1.run(AsyncReporter.java:190)
2020-04-15 13:22:10.350 INFO [ftgo-order-service,,,] 1 --- [ main] ConditionEvaluationReportLoggingListener :

Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
2020-04-15 13:22:10.849 ERROR [ftgo-order-service,,,] 1 --- [ main] o.s.boot.SpringApplication : Application run failed
org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'orderController' defined in URL [jar:file:/ftgo-order-service.jar!/BOOT-INF/classes!/net/chrisrichardson/ftgo/orderservice/web/OrderController.class]: Unsatisfied dependency expressed through constructor parameter 0; nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'orderService' defined in net.chrisrichardson.ftgo.orderservice.domain.OrderServiceConfiguration: Unsatisfied dependency expressed through method 'orderService' parameter 3; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'createOrderSagaManager': Invocation of init method failed; nested exception is java.lang.RuntimeException: org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
	at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:732) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
	at org.springframework.beans.factory.support.ConstructorResolver.autowireConstructor(ConstructorResolver.java:197) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.autowireConstructor(AbstractAutowireCapableBeanFactory.java:1276) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1133) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:543) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:503) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
	at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:317) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
	at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
	at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:315) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
	at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
	at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:760) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
	at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:869) ~[spring-context-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
	at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:550) ~[spring-context-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
	at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:140) ~[spring-boot-2.0.3.RELEASE.jar!/:2.0.3.RELEASE]
	at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:759) [spring-boot-2.0.3.RELEASE.jar!/:2.0.3.RELEASE]
	at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:395) [spring-boot-2.0.3.RELEASE.jar!/:2.0.3.RELEASE]
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:327) [spring-boot-2.0.3.RELEASE.jar!/:2.0.3.RELEASE]
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:1255) [spring-boot-2.0.3.RELEASE.jar!/:2.0.3.RELEASE]
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:1243) [spring-boot-2.0.3.RELEASE.jar!/:2.0.3.RELEASE]
	at net.chrisrichardson.ftgo.orderservice.main.OrderServiceMain.main(OrderServiceMain.java:23) [classes!/:na]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_171]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_171]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_171]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_171]
	at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:48) [ftgo-order-service.jar:na]
	at org.springframework.boot.loader.Launcher.launch(Launcher.java:87) [ftgo-order-service.jar:na]
	at org.springframework.boot.loader.Launcher.launch(Launcher.java:50) [ftgo-order-service.jar:na]
	at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:51) [ftgo-order-service.jar:na]
Caused by: org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'orderService' defined in net.chrisrichardson.ftgo.orderservice.domain.OrderServiceConfiguration: Unsatisfied dependency expressed through method 'orderService' parameter 3; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'createOrderSagaManager': Invocation of init method failed; nested exception is java.lang.RuntimeException: org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
	at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:732) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
	at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:474) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1256) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1105) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:543) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:503) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
	at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:317) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
	at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
	at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:315) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
	at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
	at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:251) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
	at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1138) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
	at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1065) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
	at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:818) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
	at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:724) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
	... 27 common frames omitted
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'createOrderSagaManager': Invocation of init method failed; nested exception is java.lang.RuntimeException: org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
	at org.springframework.beans.factory.annotation.InitDestroyAnnotationBeanPostProcessor.postProcessBeforeInitialization(InitDestroyAnnotationBeanPostProcessor.java:138) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyBeanPostProcessorsBeforeInitialization(AbstractAutowireCapableBeanFactory.java:424) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1700) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:581) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:503) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
	at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:317) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
	at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
	at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:315) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
	at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
	at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:251) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
	at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1138) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
	at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1065) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
	at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:818) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
	at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:724) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
	... 41 common frames omitted
Caused by: java.lang.RuntimeException: org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
	at io.eventuate.messaging.kafka.basic.consumer.EventuateKafkaConsumer.start(EventuateKafkaConsumer.java:115) ~[eventuate-messaging-kafka-basic-consumer-0.3.0.RELEASE.jar!/:na]
	at io.eventuate.messaging.kafka.consumer.MessageConsumerKafkaImpl.subscribe(MessageConsumerKafkaImpl.java:50) ~[eventuate-messaging-kafka-consumer-0.3.0.RELEASE.jar!/:na]
	at io.eventuate.tram.consumer.kafka.EventuateTramKafkaMessageConsumer.subscribe(EventuateTramKafkaMessageConsumer.java:23) ~[eventuate-tram-consumer-kafka-0.22.0.RC5.jar!/:na]
	at io.eventuate.tram.consumer.common.MessageConsumerImpl.subscribe(MessageConsumerImpl.java:32) ~[eventuate-tram-consumer-common-0.22.0.RC5.jar!/:na]
	at io.eventuate.tram.sagas.orchestration.SagaManagerImpl.subscribeToReplyChannel(SagaManagerImpl.java:155) ~[eventuate-tram-sagas-orchestration-0.12.0.RC5.jar!/:na]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_171]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_171]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_171]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_171]
	at org.springframework.beans.factory.annotation.InitDestroyAnnotationBeanPostProcessor$LifecycleElement.invoke(InitDestroyAnnotationBeanPostProcessor.java:365) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
	at org.springframework.beans.factory.annotation.InitDestroyAnnotationBeanPostProcessor$LifecycleMetadata.invokeInitMethods(InitDestroyAnnotationBeanPostProcessor.java:308) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
	at org.springframework.beans.factory.annotation.InitDestroyAnnotationBeanPostProcessor.postProcessBeforeInitialization(InitDestroyAnnotationBeanPostProcessor.java:135) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
	... 54 common frames omitted
Caused by: org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
	at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(...)
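The root cause of the chain above is the earlier `ClientUtils` warning: the order service cannot resolve the hostname `kafka` because the Kafka broker container has already stopped (status `Exited (137)` in the `docker ps -a` output). A minimal way to confirm and recover, using the container names from that output (the compose service names `zookeeper`, `kafka`, and `ftgo-order-service` are assumptions inferred from them):

```shell
# Confirm the broker container is down and see why it stopped
# (exit code 137 means the process was killed, commonly by the OOM killer)
docker ps -a --filter "name=kafka"
docker logs --tail 50 ftgo-application-master_kafka_1

# Restart the broker (and its zookeeper dependency), then the failed service
docker-compose up -d zookeeper kafka
docker-compose restart ftgo-order-service
```

If Kafka keeps exiting with code 137, raising the memory available to Docker is worth trying, since 137 typically indicates an out-of-memory kill.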
2020-04-15 13:59:00.405 INFO [-,,,] 1 --- [ main] s.c.a.AnnotationConfigApplicationContext : Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@77f03bb1: startup date [Wed Apr 15 13:59:00 GMT 2020]; root of context hierarchy
2020-04-15 13:59:15.590 INFO [-,,,] 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'configurationPropertiesRebinderAutoConfiguration' of type [org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration$$EnhancerBySpringCGLIB$$29d53d88] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)

[Spring Boot ASCII banner] :: Spring Boot :: (v2.0.3.RELEASE)
2020-04-15 13:59:39.122 INFO [ftgo-order-service,,,] 1 --- [ main] n.c.f.o.main.OrderServiceMain : No active profile set, falling back to default profiles: default
2020-04-15 13:59:39.891 INFO [ftgo-order-service,,,] 1 --- [ main] ConfigServletWebServerApplicationContext : Refreshing org.springframework.boot.web.servlet.context.AnnotationConfigServletWebServerApplicationContext@5f341870: startup date [Wed Apr 15 13:59:39 GMT 2020]; parent: org.springframework.context.annotation.AnnotationConfigApplicationContext@77f03bb1
2020-04-15 14:00:12.094 INFO [ftgo-order-service,,,] 1 --- [ main] o.s.b.f.s.DefaultListableBeanFactory : Overriding bean definition for bean 'sagaCommandProducer' with a different definition: replacing [Root bean: class [null]; scope=; abstract=false; lazyInit=false; autowireMode=3; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=io.eventuate.tram.sagas.orchestration.SagaOrchestratorConfiguration; factoryMethodName=sagaCommandProducer; initMethodName=null; destroyMethodName=(inferred); defined in io.eventuate.tram.sagas.orchestration.SagaOrchestratorConfiguration] with [Root bean: class [null]; scope=; abstract=false; lazyInit=false; autowireMode=3; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=net.chrisrichardson.ftgo.orderservice.domain.OrderServiceConfiguration; factoryMethodName=sagaCommandProducer; initMethodName=null; destroyMethodName=(inferred); defined in net.chrisrichardson.ftgo.orderservice.domain.OrderServiceConfiguration]
2020-04-15 14:00:34.910 INFO [ftgo-order-service,,,] 1 --- [ main] o.s.b.f.s.DefaultListableBeanFactory : Overriding bean definition for bean 'dataSource' with a different definition: replacing [Root bean: class [null]; scope=refresh; abstract=false; lazyInit=false; autowireMode=3; dependencyCheck=0; autowireCandidate=false; primary=false; factoryBeanName=org.springframework.boot.autoconfigure.jdbc.DataSourceConfiguration$Hikari; factoryMethodName=dataSource; initMethodName=null; destroyMethodName=(inferred); defined in class path resource [org/springframework/boot/autoconfigure/jdbc/DataSourceConfiguration$Hikari.class]] with [Root bean: class [org.springframework.aop.scope.ScopedProxyFactoryBean]; scope=; abstract=false; lazyInit=false; autowireMode=0; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=null; factoryMethodName=null; initMethodName=null; destroyMethodName=null; defined in BeanDefinition defined in class path resource [org/springframework/boot/autoconfigure/jdbc/DataSourceConfiguration$Hikari.class]]
2020-04-15 14:01:04.097 INFO [ftgo-order-service,,,] 1 --- [ main] o.s.cloud.context.scope.GenericScope : BeanFactory id=152ca69f-2431-33f8-8a00-97525ce0a50b
2020-04-15 14:01:21.038 INFO [ftgo-order-service,,,] 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.transaction.annotation.ProxyTransactionManagementConfiguration' of type [org.springframework.transaction.annotation.ProxyTransactionManagementConfiguration$$EnhancerBySpringCGLIB$$dbb3a8b] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2020-04-15 14:01:23.593 INFO [ftgo-order-service,,,] 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration' of type [org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration$$EnhancerBySpringCGLIB$$29d53d88] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2020-04-15 14:01:35.231 INFO [ftgo-order-service,,,] 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8080 (http)
2020-04-15 14:01:36.702 INFO [ftgo-order-service,,,] 1 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
2020-04-15 14:01:36.710 INFO [ftgo-order-service,,,] 1 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet Engine: Apache Tomcat/8.5.31
2020-04-15 14:01:37.420 INFO [ftgo-order-service,,,] 1 --- [ost-startStop-1] o.a.catalina.core.AprLifecycleListener : The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: [/usr/lib/jvm/java-1.8-openjdk/jre/lib/amd64/server:/usr/lib/jvm/java-1.8-openjdk/jre/lib/amd64:/usr/lib/jvm/java-1.8-openjdk/jre/../lib/amd64:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib]
2020-04-15 14:01:40.939 INFO [ftgo-order-service,,,] 1 --- [ost-startStop-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2020-04-15 14:01:40.946 INFO [ftgo-order-service,,,] 1 --- [ost-startStop-1] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 121051 ms
2020-04-15 14:02:47.698 INFO [ftgo-order-service,,,] 1 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'characterEncodingFilter' to: [/*]
2020-04-15 14:02:47.749 INFO [ftgo-order-service,,,] 1 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'tracingFilter' to: [/*]
2020-04-15 14:02:47.762 INFO [ftgo-order-service,,,] 1 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'exceptionLoggingFilter' to: [/*]
2020-04-15 14:02:47.764 INFO [ftgo-order-service,,,] 1 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'traceIdResponseFilter' to: [/*]
2020-04-15 14:02:47.765 INFO [ftgo-order-service,,,] 1 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'hiddenHttpMethodFilter' to: [/*]
2020-04-15 14:02:47.765 INFO [ftgo-order-service,,,] 1 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'httpPutFormContentFilter' to: [/*]
2020-04-15 14:02:47.765 INFO [ftgo-order-service,,,] 1 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'requestContextFilter' to: [/*]
2020-04-15 14:02:47.766 INFO [ftgo-order-service,,,] 1 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'httpTraceFilter' to: [/*]
2020-04-15 14:02:47.768 INFO [ftgo-order-service,,,] 1 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'webMvcMetricsFilter' to: [/*]
2020-04-15 14:02:47.775 INFO [ftgo-order-service,,,] 1 --- [ost-startStop-1] o.s.b.w.servlet.ServletRegistrationBean : Servlet dispatcherServlet mapped to [/]
2020-04-15 14:02:50.030 DEBUG [ftgo-order-service,,,] 1 --- [ost-startStop-1] n.c.f.o.web.TraceIdResponseFilter : Initializing filter 'traceIdResponseFilter'
2020-04-15 14:02:50.071 DEBUG [ftgo-order-service,,,] 1 --- [ost-startStop-1] n.c.f.o.web.TraceIdResponseFilter : Filter 'traceIdResponseFilter' configured successfully
2020-04-15 14:02:52.374 INFO [ftgo-order-service,,,] 1 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Starting...
Wed Apr 15 14:02:55 GMT 2020 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
2020-04-15 14:02:57.971 INFO [ftgo-order-service,,,] 1 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Start completed.
[the same SSL warning is repeated nine more times]
2020-04-15 14:02:59.329 INFO [ftgo-order-service,,,] 1 --- [ main] j.LocalContainerEntityManagerFactoryBean : Building JPA container EntityManagerFactory for persistence unit 'default'
2020-04-15 14:02:59.859 INFO [ftgo-order-service,,,] 1 --- [ main] o.hibernate.jpa.internal.util.LogHelper : HHH000204: Processing PersistenceUnitInfo [
    name: default
    ...]
2020-04-15 14:03:02.768 INFO [ftgo-order-service,,,] 1 --- [ main] org.hibernate.Version : HHH000412: Hibernate Core {5.2.17.Final}
2020-04-15 14:03:02.781 INFO [ftgo-order-service,,,] 1 --- [ main] org.hibernate.cfg.Environment : HHH000206: hibernate.properties not found
2020-04-15 14:03:03.295 INFO [ftgo-order-service,,,] 1 --- [ main] o.hibernate.annotations.common.Version : HCANN000001: Hibernate Commons Annotations {5.0.1.Final}
2020-04-15 14:03:09.317 INFO [ftgo-order-service,,,] 1 --- [ main] org.hibernate.dialect.Dialect : HHH000400: Using dialect: org.hibernate.dialect.MySQL5Dialect
2020-04-15 14:03:11.150 WARN [ftgo-order-service,,,] 1 --- [ main] o.h.c.a.r.JPAOverriddenAnnotationReader : HHH000207: Property net.chrisrichardson.ftgo.common.Money.amount not found in class but described in
2020-04-15 14:03:24.546 WARN [ftgo-order-service,,,] 1 --- [ main] org.apache.kafka.clients.ClientUtils : Removing server kafka:9092 from bootstrap.servers as DNS resolution failed for kafka
2020-04-15 14:03:24.576 ERROR [ftgo-order-service,,,] 1 --- [ main] i.e.m.k.b.c.EventuateKafkaConsumer : Error subscribing
org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
    at org.apache.kafka.clients.consumer.KafkaConsumer.<init>
2020-04-15 14:03:24.583 WARN [ftgo-order-service,,,] 1 --- [ main] ConfigServletWebServerApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'orderController' defined in URL [jar:file:/ftgo-order-service.jar!/BOOT-INF/classes!/net/chrisrichardson/ftgo/orderservice/web/OrderController.class]: Unsatisfied dependency expressed through constructor parameter 0; nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'orderService' defined in net.chrisrichardson.ftgo.orderservice.domain.OrderServiceConfiguration: Unsatisfied dependency expressed through method 'orderService' parameter 3; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'createOrderSagaManager': Invocation of init method failed; nested exception is java.lang.RuntimeException: org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
2020-04-15 14:03:24.585 INFO [ftgo-order-service,,,] 1 --- [ main] j.LocalContainerEntityManagerFactoryBean : Closing JPA EntityManagerFactory for persistence unit 'default'
2020-04-15 14:03:25.136 INFO [ftgo-order-service,,,] 1 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Shutdown initiated...
2020-04-15 14:03:25.154 INFO [ftgo-order-service,,,] 1 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Shutdown completed.
2020-04-15 14:03:25.159 INFO [ftgo-order-service,,,] 1 --- [ main] o.apache.catalina.core.StandardService : Stopping service [Tomcat]
2020-04-15 14:03:25.242 INFO [ftgo-order-service,,,] 1 --- [ main] ConditionEvaluationReportLoggingListener : Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
2020-04-15 14:03:25.251 ERROR [ftgo-order-service,,,] 1 --- [ main] o.s.boot.SpringApplication : Application run failed
org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'orderController' defined in URL [jar:file:/ftgo-order-service.jar!/BOOT-INF/classes!/net/chrisrichardson/ftgo/orderservice/web/OrderController.class]: Unsatisfied dependency expressed through constructor parameter 0; nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'orderService' defined in net.chrisrichardson.ftgo.orderservice.domain.OrderServiceConfiguration: Unsatisfied dependency expressed through method 'orderService' parameter 3; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'createOrderSagaManager': Invocation of init method failed; nested exception is java.lang.RuntimeException: org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
    at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:732) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
    at org.springframework.beans.factory.support.ConstructorResolver.autowireConstructor(ConstructorResolver.java:197) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.autowireConstructor(AbstractAutowireCapableBeanFactory.java:1276) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1133) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:543) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:503) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
    at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:317) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
    at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
    at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:315) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
    at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
    at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:760) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
    at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:869) ~[spring-context-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
    at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:550) ~[spring-context-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
    at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:140) ~[spring-boot-2.0.3.RELEASE.jar!/:2.0.3.RELEASE]
    at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:759) [spring-boot-2.0.3.RELEASE.jar!/:2.0.3.RELEASE]
    at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:395) [spring-boot-2.0.3.RELEASE.jar!/:2.0.3.RELEASE]
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:327) [spring-boot-2.0.3.RELEASE.jar!/:2.0.3.RELEASE]
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:1255) [spring-boot-2.0.3.RELEASE.jar!/:2.0.3.RELEASE]
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:1243) [spring-boot-2.0.3.RELEASE.jar!/:2.0.3.RELEASE]
    at net.chrisrichardson.ftgo.orderservice.main.OrderServiceMain.main(OrderServiceMain.java:23) [classes!/:na]
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_171]
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_171]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_171]
    at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_171]
    at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:48) [ftgo-order-service.jar:na]
    at org.springframework.boot.loader.Launcher.launch(Launcher.java:87) [ftgo-order-service.jar:na]
    at org.springframework.boot.loader.Launcher.launch(Launcher.java:50) [ftgo-order-service.jar:na]
    at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:51) [ftgo-order-service.jar:na]
Caused by: org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'orderService' defined in net.chrisrichardson.ftgo.orderservice.domain.OrderServiceConfiguration: Unsatisfied dependency expressed through method 'orderService' parameter 3; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'createOrderSagaManager': Invocation of init method failed; nested exception is java.lang.RuntimeException: org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
    at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:732) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
    at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:474) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1256) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1105) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:543) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:503) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
    at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:317) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
    at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
    at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:315) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
    at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
    at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:251) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
    at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1138) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
    at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1065) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
    at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:818) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
    at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:724) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
    ... 27 common frames omitted
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'createOrderSagaManager': Invocation of init method failed; nested exception is java.lang.RuntimeException: org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
    at org.springframework.beans.factory.annotation.InitDestroyAnnotationBeanPostProcessor.postProcessBeforeInitialization(InitDestroyAnnotationBeanPostProcessor.java:138) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyBeanPostProcessorsBeforeInitialization(AbstractAutowireCapableBeanFactory.java:424) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1700) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:581) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:503) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
    at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:317) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
    at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
    at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:315) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
    at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
    at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:251) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
    at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1138) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
    at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1065) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
    at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:818) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
    at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:724) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
    ... 41 common frames omitted
Caused by: java.lang.RuntimeException: org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
    at io.eventuate.messaging.kafka.basic.consumer.EventuateKafkaConsumer.start(EventuateKafkaConsumer.java:115) ~[eventuate-messaging-kafka-basic-consumer-0.3.0.RELEASE.jar!/:na]
    at io.eventuate.messaging.kafka.consumer.MessageConsumerKafkaImpl.subscribe(MessageConsumerKafkaImpl.java:50) ~[eventuate-messaging-kafka-consumer-0.3.0.RELEASE.jar!/:na]
    at io.eventuate.tram.consumer.kafka.EventuateTramKafkaMessageConsumer.subscribe(EventuateTramKafkaMessageConsumer.java:23) ~[eventuate-tram-consumer-kafka-0.22.0.RC5.jar!/:na]
    at io.eventuate.tram.consumer.common.MessageConsumerImpl.subscribe(MessageConsumerImpl.java:32) ~[eventuate-tram-consumer-common-0.22.0.RC5.jar!/:na]
    at io.eventuate.tram.sagas.orchestration.SagaManagerImpl.subscribeToReplyChannel(SagaManagerImpl.java:155) ~[eventuate-tram-sagas-orchestration-0.12.0.RC5.jar!/:na]
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_171]
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_171]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_171]
    at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_171]
    at org.springframework.beans.factory.annotation.InitDestroyAnnotationBeanPostProcessor$LifecycleElement.invoke(InitDestroyAnnotationBeanPostProcessor.java:365) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
    at org.springframework.beans.factory.annotation.InitDestroyAnnotationBeanPostProcessor$LifecycleMetadata.invokeInitMethods(InitDestroyAnnotationBeanPostProcessor.java:308) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
    at org.springframework.beans.factory.annotation.InitDestroyAnnotationBeanPostProcessor.postProcessBeforeInitialization(InitDestroyAnnotationBeanPostProcessor.java:135) ~[spring-beans-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
    ... 54 common frames omitted
Caused by: org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
    at org.apache.kafka.clients.consumer.KafkaConsumer.<init>
D:\AllWorkspaces\manningWorkspace\ftgo-application-master>
I would run the following commands in sequence:
docker-compose up -d kafka
docker-compose up -d cdc-service
docker-compose up -d order-service
After running each command I would wait a while to make sure that the container starts and, in the case of the cdc-service/order-service, that docker ps
shows it as healthy.
If everything seems OK I would then run docker-compose up -d
to start the remaining services.
If the containers are exiting for no apparent reason, it's possible that they are running out of memory and that you need to reconfigure the Docker VM.
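The one-service-at-a-time sequence above can be scripted. This is only a sketch: `wait_healthy` is a helper name I made up, and the container names are taken from the `docker ps` output earlier in this thread; adjust them to whatever names your compose project actually creates.

```shell
# Hypothetical helper: poll `docker inspect` until the named container
# reports a healthy state, then move on to the next service.
wait_healthy() {
  container="$1"
  tries="${2:-30}"    # ~5 minutes at 10 seconds per attempt
  while [ "$tries" -gt 0 ]; do
    status=$(docker inspect --format '{{.State.Health.Status}}' "$container" 2>/dev/null)
    [ "$status" = "healthy" ] && return 0
    tries=$((tries - 1))
    sleep 10
  done
  echo "timed out waiting for $container" >&2
  return 1
}

# Intended usage (requires a running Docker engine and this repo's compose file):
#   docker-compose up -d kafka
#   docker-compose up -d cdc-service
#   wait_healthy ftgo-application-master_cdc-service_1
#   docker-compose up -d ftgo-order-service
#   wait_healthy ftgo-application-master_ftgo-order-service_1
#   docker-compose up -d        # everything else
```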
The above two services started successfully. Then I tried running docker-compose up -d order-service. Do I need to give something other than 'order-service'? Output as shown below:
D:\AllWorkspaces\manningWorkspace\ftgo-application-master>docker-compose up -d order-service
ERROR: No such service: order-service
D:\AllWorkspaces\manningWorkspace\ftgo-application-master>
oops. docker-compose up -d ftgo-order-service
If you look in the docker-compose.yml file you can see the container names and fix my typos :-)
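Rather than reading the compose file by eye, the standard compose CLI can print the exact service names it will accept (run from the directory containing `docker-compose.yml`):

```shell
# Lists the service names defined in docker-compose.yml, one per line,
# e.g. ftgo-order-service rather than order-service:
docker-compose config --services
```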
I ran the following commands. The output does not show any error, but http://192.168.99.100:8889 cannot be reached.
D:\AllWorkspaces\manningWorkspace\ftgo-application-master>docker-compose up -d ftgo-order-service
ftgo-application-master_zookeeper_1 is up-to-date
ftgo-application-master_zipkin_1 is up-to-date
ftgo-application-master_mysql_1 is up-to-date
ftgo-application-master_kafka_1 is up-to-date
ftgo-application-master_cdc-service_1 is up-to-date
Starting ftgo-application-master_ftgo-order-service_1 ... done
D:\AllWorkspaces\manningWorkspace\ftgo-application-master>docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
39b4be440915 ftgo-application-master_ftgo-order-service "/bin/sh -c 'java ${" 4 hours ago Up About a minute (healthy) 0.0.0.0:8082->8080/tcp ftgo-application-master_ftgo-order-service_1
32e478759f46 eventuateio/eventuate-cdc-service:0.4.0.RELEASE "/bin/sh -c 'java ${" 4 hours ago Up 21 minutes (healthy) 0.0.0.0:8099->8080/tcp ftgo-application-master_cdc-service_1
47110c01a6a7 eventuateio/eventuate-kafka:0.3.0.RELEASE "/bin/bash -c ./run-" 4 hours ago Up 25 minutes 0.0.0.0:9092->9092/tcp ftgo-application-master_kafka_1
7b6d9f6411f0 ftgo-application-master_dynamodblocal-init "/bin/sh -c './wait-" 4 hours ago Up 4 hours (healthy) ftgo-application-master_dynamodblocal-init_1
37f828ac67e0 openzipkin/zipkin:2.5.0 "/bin/sh -c 'test -n" 4 hours ago Up 4 hours 9410/tcp, 0.0.0.0:9411->9411/tcp ftgo-application-master_zipkin_1
fcd514af0edd eventuateio/eventuate-zookeeper:0.4.0.RELEASE "/usr/local/zookeepe" 4 hours ago Up 4 hours 2888/tcp, 0.0.0.0:2181->2181/tcp, 3888/tcp ftgo-application-master_zookeeper_1
4f61d23cafb7 ftgo-application-master_mysql "docker-entrypoint.s" 4 hours ago Up 4 hours 0.0.0.0:3306->3306/tcp ftgo-application-master_mysql_1
cdb12f2404fd ftgo-application-master_dynamodblocal "/bin/sh -c 'java -j" 4 hours ago Up 4 hours (healthy) 0.0.0.0:8000->8000/tcp ftgo-application-master_dynamodblocal_1
dc91e5a7b938 ftgo-application-master_ftgo-api-gateway "/bin/sh -c 'java ${" 4 hours ago Up 4 hours (healthy) 0.0.0.0:8087->8080/tcp ftgo-application-master_ftgo-api-gateway_1
D:\AllWorkspaces\manningWorkspace\ftgo-application-master>docker-compose up -d
ftgo-application-master_mysql_1 is up-to-date
ftgo-application-master_zookeeper_1 is up-to-date
ftgo-application-master_dynamodblocal_1 is up-to-date
ftgo-application-master_zipkin_1 is up-to-date
ftgo-application-master_ftgo-api-gateway_1 is up-to-date
ftgo-application-master_kafka_1 is up-to-date
ftgo-application-master_dynamodblocal-init_1 is up-to-date
ftgo-application-master_cdc-service_1 is up-to-date
Starting ftgo-application-master_ftgo-accounting-service_1 ... done
Starting ftgo-application-master_ftgo-restaurant-service_1 ... done
Starting ftgo-application-master_ftgo-kitchen-service_1 ... done
Starting ftgo-application-master_ftgo-order-history-service_1 ... done
Starting ftgo-application-master_ftgo-consumer-service_1 ... done
D:\AllWorkspaces\manningWorkspace\ftgo-application-master>
Are you saying that you ran
docker run -p 8889:8888 -e DOCKER_DIAGNOSTICS_PORT=8889 -e DOCKER_HOST_IP \
--rm eventuateio/eventuateio-docker-networking-diagnostics:0.2.0.RELEASE
And it did not work? What was the error?
More importantly:
docker ps -a
?
The processes are very short-lived, about 20 minutes; they exit after some time (reason unknown). Now docker ps -a is showing that.
I had tried to access the Swagger UIs immediately after docker-compose came up, but no luck.
I also verified DOCKER_HOST_IP with the following command. It shows success, but the connection is actually refused when accessing 192.168.99.100:8889.
D:\AllWorkspaces\manningWorkspace\ftgo-application-master>docker run -p 8889:8888 -e DOCKER_DIAGNOSTICS_PORT=8889 -e DOCKER_HOST_IP --rm eventuateio/eventuateio-docker-networking-diagnostics:0.2.0.RELEASE
DOCKER_HOST_IP= 192.168.99.100
Server running on port: 8889
About to make HTTP request to self
Making HTTP request to self via url= http://192.168.99.100:8889
SUCCESSS!!!!
D:\AllWorkspaces\manningWorkspace\ftgo-application-master>
It looks like you have to increase the memory - see https://docs.docker.com/docker-for-windows/#docker-settings-dialog
Ok, I will try to do this by tomorrow. But one question: doesn't it show or log the OUT_OF_MEMORY error on the console, or wherever the logs are stored? I think I should check the log first and then try to increase the memory. Do you know where I can find the relevant log?
Normally errors show up in the container logs - the output of docker logs container.
However, if containers are exiting without any error messages, I'd suspect it's a lack-of-memory issue that causes termination of the container.
You might see messages in the Docker daemon logs: https://docs.docker.com/config/daemon/
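One way to distinguish a memory kill from an application error: a container killed for memory typically exits with code 137 (128 + SIGKILL), and `docker inspect` exposes an `OOMKilled` flag. The container name below is taken from the `docker ps -a` output earlier in this thread; substitute any exited container.

```shell
# Exit code and OOM flag for an exited container:
docker inspect --format 'exit={{.State.ExitCode}} oomKilled={{.State.OOMKilled}}' \
  ftgo-application-master_ftgo-restaurant-service_1

# Last lines of the container's own log, where application errors appear:
docker logs --tail 100 ftgo-application-master_ftgo-restaurant-service_1
```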
Hey Chris.
I have a Windows 8.1 laptop, and Docker Desktop can only be installed on Windows 10 or higher - correct me if I'm wrong. I've installed Docker Toolbox for Windows 8.1, that is DockerToolbox-19.03.1.exe, and I don't see a Docker Desktop icon in the notification area. How can I set or increase the memory?
I see. You need to recreate the Docker Toolbox VM with more memory - google 'docker toolbox memory settings'
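For Docker Toolbox the VM is managed with `docker-machine`, so one way to do this is to recreate the VM with explicit memory/CPU flags. A sketch, using the standard VirtualBox driver options; note that removing the VM also deletes its containers and images.

```shell
# WARNING: removes the current VM and everything stored in it.
docker-machine rm -f default

# Recreate with 6 GB of RAM and 2 CPUs (VirtualBox driver flags):
docker-machine create -d virtualbox --virtualbox-memory 6144 --virtualbox-cpus 2 default

# Point this shell's docker client at the new VM:
eval "$(docker-machine env default)"
```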
Chris.
When I'm working in a normal Windows command prompt versus the Docker Quickstart Terminal (which opens a MINGW64 CLI, a Linux-like environment), the output of the following command differs:
docker run -p 8889:8888 -e DOCKER_DIAGNOSTICS_PORT=8889 -e DOCKER_HOST_IP --rm eventuateio/eventuateio-docker-networking-diagnostics:0.2.0.RELEASE
D:\AllWorkspaces\manningWorkspace\ftgo-application-master>set DOCKER_HOST_IP=192.168.99.100
D:\AllWorkspaces\manningWorkspace\ftgo-application-master>echo %DOCKER_HOST_IP%
192.168.99.100
D:\AllWorkspaces\manningWorkspace\ftgo-application-master>docker run -p 8889:8888 -e DOCKER_DIAGNOSTICS_PORT=8889 -e DOCKER_HOST_IP --rm eventuateio/eventuateio-docker-networking-diagnostics:0.2.0.RELEASE
DOCKER_HOST_IP= 192.168.99.100
Server running on port: 8889
About to make HTTP request to self
Making HTTP request to self via url= http://192.168.99.100:8889
SUCCESSS!!!!
D:\AllWorkspaces\manningWorkspace\ftgo-application-master>
But when using the MINGW64 terminal I'm not getting the Success message, and http://192.168.99.100:8889 is not accessible. Please see the output below:
docker is configured to use the default machine with IP 192.168.99.100
For help getting started, check out the docs at https://docs.docker.com
Start interactive shell
Lenovo@PRAKASH-LENOVO MINGW64 /c/Program Files/Docker Toolbox
$ pwd
/c/Program Files/Docker Toolbox
Lenovo@PRAKASH-LENOVO MINGW64 /c/Program Files/Docker Toolbox
$ cd /d/AllWorkspaces/manningWorkspace/ftgo-application-master/
Lenovo@PRAKASH-LENOVO MINGW64 /d/AllWorkspaces/manningWorkspace/ftgo-application-master
$ echo $DOCKER_HOST_IP

Lenovo@PRAKASH-LENOVO MINGW64 /d/AllWorkspaces/manningWorkspace/ftgo-application-master
$ DOCKER_HOST_IP=192.168.99.100
Lenovo@PRAKASH-LENOVO MINGW64 /d/AllWorkspaces/manningWorkspace/ftgo-application-master
$ echo $DOCKER_HOST_IP
192.168.99.100
Lenovo@PRAKASH-LENOVO MINGW64 /d/AllWorkspaces/manningWorkspace/ftgo-application-master
$ docker run -p 8889:8888 -e DOCKER_DIAGNOSTICS_PORT=8889 -e DOCKER_HOST_IP --rm eventuateio/eventuateio-docker-networking-diagnostics:0.2.0.RELEASE
DOCKER_HOST_IP is not set or is blank
Lenovo@PRAKASH-LENOVO MINGW64 /d/AllWorkspaces/manningWorkspace/ftgo-application-master
$
Is there a difference between working in a normal Windows command prompt and the MINGW64 terminal? In both cases I've set the environment variable correctly; it's reflected by the echo command in both. But docker run -p 8889:8888 -e DOCKER_DIAGNOSTICS_PORT=8889 -e DOCKER_HOST_IP --rm eventuateio/eventuateio-docker-networking-diagnostics:0.2.0.RELEASE seems not to work on the MINGW64 terminal (message: DOCKER_HOST_IP is not set or is blank), while the same command works in the normal Windows command prompt (shows the Success message). Why so?
Is my DOCKER_HOST_IP environment variable not set properly in the MINGW64 terminal? Could this also be the cause of the Swagger UIs not being accessible? Do you know how to overcome this issue?
Please see the attachments.
Thanks Prakash S. Mumbai, India
Swagger UIs not accessible
docker logs container
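A likely explanation for the MINGW64 behaviour (an assumption on my part, but easy to verify): in bash, a plain `VAR=value` assignment creates only a shell variable, which child processes such as `docker` do not inherit until it is `export`ed; `docker run -e DOCKER_HOST_IP` with no `=value` copies the variable from the environment, so it arrives blank. In cmd.exe, `set` always places the variable in the process environment, which is why the same command works there. A quick demonstration:

```shell
# Assuming DOCKER_HOST_IP is not already exported in this shell:
DOCKER_HOST_IP=192.168.99.100
sh -c 'echo "child sees: [$DOCKER_HOST_IP]"'    # prints: child sees: []

# After export, child processes (including `docker run -e DOCKER_HOST_IP`)
# inherit the value:
export DOCKER_HOST_IP
sh -c 'echo "child sees: [$DOCKER_HOST_IP]"'    # prints: child sees: [192.168.99.100]
```

So in the Docker Quickstart Terminal, `export DOCKER_HOST_IP=192.168.99.100` should make the diagnostics container see the value.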
Hi Chris.
You pointed out that I may have to start docker-machine with more memory. I did that, and it now uses 2 CPUs and 6144 MB (6 GB) of memory; earlier it was one CPU and 2 GB. I correctly followed what you told me and brought up each service on Docker one by one, e.g.:
docker-compose up -d kafka
docker-compose up -d cdc-service
docker-compose up -d ftgo-order-service
and finally ran docker-compose up -d
to bring up all the services, and I was monitoring each service's status (up and running, or exited) using docker ps -a. I noticed that 3-4 services exited; ftgo-order-service was one of them. Hence I ran docker logs container for ftgo-order-service. The service is exiting not because of out-of-memory but because of some other exception, as you can see in the stack trace given below.
-- Exception stack trace for ftgo-order-service --
2020-04-19 14:42:53.036 ERROR [ftgo-order-service,,,] 1 --- [ main] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Exception during pool initialization.
java.sql.SQLException: Access denied for user 'ftgo_order_service_user'@'172.19.0.7' (using password: YES)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:965) ~[mysql-connector-java-5.1.46.jar!/:5.1.46]
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3976) ~[mysql-connector-java-5.1.46.jar!/:5.1.46]
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3912) ~[mysql-connector-java-5.1.46.jar!/:5.1.46]
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:871) ~[mysql-connector-java-5.1.46.jar!/:5.1.46]
at com.mysql.jdbc.MysqlIO.proceedHandshakeWithPluggableAuthentication(MysqlIO.java:1714) ~[mysql-connector-java-5.1.46.jar!/:5.1.46]
at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1224) ~[mysql-connector-java-5.1.46.jar!/:5.1.46]
at com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:2190) ~[mysql-connector-java-5.1.46.jar!/:5.1.46]
at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2221) ~[mysql-connector-java-5.1.46.jar!/:5.1.46]
at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2016) ~[mysql-connector-java-5.1.46.jar!/:5.1.46]
at com.mysql.jdbc.ConnectionImpl.<init>
2020-04-19 14:42:53.057 WARN [ftgo-order-service,,,] 1 --- [ main] o.s.b.a.orm.jpa.DatabaseLookup : Unable to determine jdbc url from datasource
Please suggest how I can overcome this issue.
Thanks Prakash S. Mumbai, India
I mistakenly clicked the Close button. Reopened again.
java.sql.SQLException: Access denied for user 'ftgo_order_service_user'@'172.19.0.7' (using password: YES)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:965) ~[mysql-connector-java-5.1.46.jar!/:5.1.46]
This is a database problem. What's in the mysql container's log?
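One quick check is to attempt the failing login from inside the mysql container itself. This is only a sketch against the running stack: the container name and the ftgo_order_service_password value are assumptions, so verify them against the SQL init scripts in the repo's mysql/ directory.

```shell
# Try the failing login from inside the mysql container.
# NOTE: container name and password are assumptions -- check the
# repo's mysql/ init scripts for the actual credentials.
docker exec -it ftgo-application-master_mysql_1 \
  mysql -u ftgo_order_service_user -pftgo_order_service_password \
  -e "SELECT current_user(); SHOW DATABASES;"

# If the user does not exist, the init scripts probably never ran
# because a data volume survived from an earlier run. Recreate the
# database from scratch:
docker-compose down -v
docker-compose up -d mysql
```

The mysql image only runs the /docker-entrypoint-initdb.d scripts when the data directory is empty, so a stale volume from an earlier, partially initialized run is a common cause of "Access denied" for users those scripts were supposed to create.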
-- mysql container's log below --
D:\intelliJWorkspaces\manningWorkspace\ftgo-application>docker logs 4f61d23cafb7
Initializing database
2020-04-15T13:10:06.741249Z 0 [Warning] InnoDB: New log files created, LSN=45790
2020-04-15T13:10:07.107836Z 0 [Warning] InnoDB: Creating foreign key constraint system tables.
2020-04-15T13:10:07.211231Z 0 [Warning] No existing UUID has been found, so we assume that this is the first time that this server has been started. Generating a new UUID: 6d8ddc3c-7f1a-11ea-93f4-0242ac120004.
2020-04-15T13:10:07.212138Z 0 [Warning] Gtid table is not ready to be used. Table 'mysql.gtid_executed' cannot be opened.
2020-04-15T13:10:07.213112Z 1 [Warning] root@localhost is created with an empty password ! Please consider switching off the --initialize-insecure option.
2020-04-15T13:10:10.398184Z 1 [Warning] 'user' entry 'root@localhost' ignored in --skip-name-resolve mode.
2020-04-15T13:10:10.402872Z 1 [Warning] 'user' entry 'mysql.sys@localhost' ignored in --skip-name-resolve mode.
2020-04-15T13:10:10.403030Z 1 [Warning] 'db' entry 'sys mysql.sys@localhost' ignored in --skip-name-resolve mode.
2020-04-15T13:10:10.403087Z 1 [Warning] 'proxies_priv' entry '@ root@localhost' ignored in --skip-name-resolve mode.
2020-04-15T13:10:10.403166Z 1 [Warning] 'tables_priv' entry 'sys_config mysql.sys@localhost' ignored in --skip-name-resolve mode.
Database initialized
MySQL init process in progress...
MySQL init process in progress...
2020-04-15T13:10:16.102565Z 0 [Note] mysqld (mysqld 5.7.13-log) starting as process 49 ...
2020-04-15T13:10:16.227844Z 0 [Note] InnoDB: PUNCH HOLE support available 2020-04-15T13:10:16.228046Z 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins 2020-04-15T13:10:16.228092Z 0 [Note] InnoDB: Uses event mutexes 2020-04-15T13:10:16.228313Z 0 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier 2020-04-15T13:10:16.228827Z 0 [Note] InnoDB: Compressed tables use zlib 1.2.8 2020-04-15T13:10:16.232164Z 0 [Note] InnoDB: Using Linux native AIO 2020-04-15T13:10:16.237667Z 0 [Note] InnoDB: Number of pools: 1 2020-04-15T13:10:16.244032Z 0 [Note] InnoDB: Using CPU crc32 instructions 2020-04-15T13:10:16.265154Z 0 [Note] InnoDB: Initializing buffer pool, total size = 128M, instances = 1, chunk size = 128M 2020-04-15T13:10:16.384853Z 0 [Note] InnoDB: Completed initialization of buffer pool 2020-04-15T13:10:16.466136Z 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority(). 2020-04-15T13:10:16.502202Z 0 [Note] InnoDB: Highest supported file format is Barracuda. 2020-04-15T13:10:16.678977Z 0 [Note] InnoDB: Creating shared tablespace for temporary tables 2020-04-15T13:10:16.681815Z 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ... 2020-04-15T13:10:16.921331Z 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB. 2020-04-15T13:10:16.928939Z 0 [Note] InnoDB: 96 redo rollback segment(s) found. 96 redo rollback segment(s) are active. 2020-04-15T13:10:16.948207Z 0 [Note] InnoDB: 32 non-redo rollback segment(s) are active. 2020-04-15T13:10:16.952514Z 0 [Note] InnoDB: Waiting for purge to start 2020-04-15T13:10:17.172783Z 0 [Note] InnoDB: 5.7.13 started; log sequence number 2525487 2020-04-15T13:10:17.200312Z 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool 2020-04-15T13:10:17.203289Z 0 [Note] Plugin 'FEDERATED' is disabled. 
2020-04-15T13:10:17.462227Z 0 [Note] InnoDB: Buffer pool(s) load completed at 200415 13:10:17 2020-04-15T13:10:17.586245Z 0 [Warning] Failed to set up SSL because of the following SSL library error: SSL context is not usable without certificate and private key MySQL init process in progress... 2020-04-15T13:10:17.695888Z 0 [Warning] 'user' entry 'root@localhost' ignored in --skip-name-resolve mode. 2020-04-15T13:10:17.696569Z 0 [Warning] 'user' entry 'mysql.sys@localhost' ignored in --skip-name-resolve mode. 2020-04-15T13:10:17.700589Z 0 [Warning] 'db' entry 'sys mysql.sys@localhost' ignored in --skip-name-resolve mode. 2020-04-15T13:10:17.702099Z 0 [Warning] 'proxies_priv' entry '@ root@localhost' ignored in --skip-name-resolve mode. 2020-04-15T13:10:18.363557Z 0 [Warning] 'tables_priv' entry 'sys_config mysql.sys@localhost' ignored in --skip-name-resolve mode. 2020-04-15T13:10:18.840328Z 0 [Note] Event Scheduler: Loaded 0 events 2020-04-15T13:10:18.844081Z 0 [Note] mysqld: ready for connections. Version: '5.7.13-log' socket: '/var/run/mysqld/mysqld.sock' port: 0 MySQL Community Server (GPL) Warning: Unable to load '/usr/share/zoneinfo/Factory' as time zone. Skipping it. Warning: Unable to load '/usr/share/zoneinfo/iso3166.tab' as time zone. Skipping it. Warning: Unable to load '/usr/share/zoneinfo/leap-seconds.list' as time zone. Skipping it. Warning: Unable to load '/usr/share/zoneinfo/posix/Factory' as time zone. Skipping it. Warning: Unable to load '/usr/share/zoneinfo/right/Factory' as time zone. Skipping it. Warning: Unable to load '/usr/share/zoneinfo/zone.tab' as time zone. Skipping it. 2020-04-15T13:10:50.144225Z 4 [Warning] 'db' entry 'sys mysql.sys@localhost' ignored in --skip-name-resolve mode. 2020-04-15T13:10:50.151738Z 4 [Warning] 'proxies_priv' entry '@ root@localhost' ignored in --skip-name-resolve mode. 2020-04-15T13:10:50.162170Z 4 [Warning] 'tables_priv' entry 'sys_config mysql.sys@localhost' ignored in --skip-name-resolve mode. 
mysql: [Warning] Using a password on the command line interface can be insecure. mysql: [Warning] Using a password on the command line interface can be insecure. 2020-04-15T13:10:50.315767Z 6 [Warning] 'db' entry 'sys mysql.sys@localhost' ignored in --skip-name-resolve mode. 2020-04-15T13:10:50.316742Z 6 [Warning] 'proxies_priv' entry '@ root@localhost' ignored in --skip-name-resolve mode. 2020-04-15T13:10:50.317343Z 6 [Warning] 'tables_priv' entry 'sys_config mysql.sys@localhost' ignored in --skip-name-resolve mode.
/usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/3.common-schema.sql mysql: [Warning] Using a password on the command line interface can be insecure.
/usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/4.compile-schema-per-service.sh
/usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/5.schema-per-service.sql mysql: [Warning] Using a password on the command line interface can be insecure.
/usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/template
2020-04-15T13:10:54.082804Z 0 [Note] Giving 0 client threads a chance to die gracefully 2020-04-15T13:10:54.084183Z 0 [Note] Shutting down slave threads 2020-04-15T13:10:54.085677Z 0 [Note] Forcefully disconnecting 0 remaining clients 2020-04-15T13:10:54.085853Z 0 [Note] Event Scheduler: Purging the queue. 0 events 2020-04-15T13:10:54.093105Z 0 [Note] Binlog end 2020-04-15T13:10:54.104765Z 0 [Note] Shutting down plugin 'ngram' 2020-04-15T13:10:54.105106Z 0 [Note] Shutting down plugin 'BLACKHOLE' 2020-04-15T13:10:54.105204Z 0 [Note] Shutting down plugin 'partition' 2020-04-15T13:10:54.105246Z 0 [Note] Shutting down plugin 'ARCHIVE' 2020-04-15T13:10:54.105284Z 0 [Note] Shutting down plugin 'MEMORY' 2020-04-15T13:10:54.106003Z 0 [Note] Shutting down plugin 'INNODB_SYS_VIRTUAL' 2020-04-15T13:10:54.106217Z 0 [Note] Shutting down plugin 'INNODB_SYS_DATAFILES' 2020-04-15T13:10:54.106332Z 0 [Note] Shutting down plugin 'INNODB_SYS_TABLESPACES' 2020-04-15T13:10:54.106438Z 0 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN_COLS' 2020-04-15T13:10:54.106495Z 0 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN' 2020-04-15T13:10:54.106973Z 0 [Note] Shutting down plugin 'INNODB_SYS_FIELDS' 2020-04-15T13:10:54.107043Z 0 [Note] Shutting down plugin 'INNODB_SYS_COLUMNS' 2020-04-15T13:10:54.107160Z 0 [Note] Shutting down plugin 'INNODB_SYS_INDEXES' 2020-04-15T13:10:54.107872Z 0 [Note] Shutting down plugin 'INNODB_SYS_TABLESTATS' 2020-04-15T13:10:54.108187Z 0 [Note] Shutting down plugin 'INNODB_SYS_TABLES' 2020-04-15T13:10:54.108322Z 0 [Note] Shutting down plugin 'INNODB_FT_INDEX_TABLE' 2020-04-15T13:10:54.108367Z 0 [Note] Shutting down plugin 'INNODB_FT_INDEX_CACHE' 2020-04-15T13:10:54.108404Z 0 [Note] Shutting down plugin 'INNODB_FT_CONFIG' 2020-04-15T13:10:54.108441Z 0 [Note] Shutting down plugin 'INNODB_FT_BEING_DELETED' 2020-04-15T13:10:54.108477Z 0 [Note] Shutting down plugin 'INNODB_FT_DELETED' 2020-04-15T13:10:54.108858Z 0 [Note] Shutting down plugin 'INNODB_FT_DEFAULT_STOPWORD' 
2020-04-15T13:10:54.108958Z 0 [Note] Shutting down plugin 'INNODB_METRICS' 2020-04-15T13:10:54.109010Z 0 [Note] Shutting down plugin 'INNODB_TEMP_TABLE_INFO' 2020-04-15T13:10:54.109698Z 0 [Note] Shutting down plugin 'INNODB_BUFFER_POOL_STATS' 2020-04-15T13:10:54.110090Z 0 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE_LRU' 2020-04-15T13:10:54.110746Z 0 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE' 2020-04-15T13:10:54.111076Z 0 [Note] Shutting down plugin 'INNODB_CMP_PER_INDEX_RESET' 2020-04-15T13:10:54.111143Z 0 [Note] Shutting down plugin 'INNODB_CMP_PER_INDEX' 2020-04-15T13:10:54.111194Z 0 [Note] Shutting down plugin 'INNODB_CMPMEM_RESET' 2020-04-15T13:10:54.111232Z 0 [Note] Shutting down plugin 'INNODB_CMPMEM' 2020-04-15T13:10:54.111269Z 0 [Note] Shutting down plugin 'INNODB_CMP_RESET' 2020-04-15T13:10:54.111847Z 0 [Note] Shutting down plugin 'INNODB_CMP' 2020-04-15T13:10:54.111912Z 0 [Note] Shutting down plugin 'INNODB_LOCK_WAITS' 2020-04-15T13:10:54.111952Z 0 [Note] Shutting down plugin 'INNODB_LOCKS' 2020-04-15T13:10:54.112049Z 0 [Note] Shutting down plugin 'INNODB_TRX' 2020-04-15T13:10:54.112104Z 0 [Note] Shutting down plugin 'InnoDB' 2020-04-15T13:10:54.119538Z 0 [Note] InnoDB: FTS optimize thread exiting. 2020-04-15T13:10:54.125859Z 0 [Note] InnoDB: Starting shutdown... 
2020-04-15T13:10:54.227957Z 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool 2020-04-15T13:10:54.228289Z 0 [Note] InnoDB: Buffer pool(s) dump completed at 200415 13:10:54 2020-04-15T13:10:57.640354Z 0 [Note] InnoDB: Shutdown completed; log sequence number 12577494 2020-04-15T13:10:57.646100Z 0 [Note] InnoDB: Removed temporary tablespace data file: "ibtmp1" 2020-04-15T13:10:57.646328Z 0 [Note] Shutting down plugin 'MyISAM' 2020-04-15T13:10:57.647010Z 0 [Note] Shutting down plugin 'MRG_MYISAM' 2020-04-15T13:10:57.650283Z 0 [Note] Shutting down plugin 'CSV' 2020-04-15T13:10:57.653426Z 0 [Note] Shutting down plugin 'PERFORMANCE_SCHEMA' 2020-04-15T13:10:57.655059Z 0 [Note] Shutting down plugin 'sha256_password' 2020-04-15T13:10:57.655439Z 0 [Note] Shutting down plugin 'mysql_native_password' 2020-04-15T13:10:57.666297Z 0 [Note] Shutting down plugin 'binlog' 2020-04-15T13:10:57.697752Z 0 [Note] mysqld: Shutdown complete
MySQL init process done. Ready for start up.
2020-04-15T13:11:00.225153Z 0 [Note] mysqld (mysqld 5.7.13-log) starting as process 1 ... 2020-04-15T13:11:00.733943Z 0 [Note] InnoDB: PUNCH HOLE support available 2020-04-15T13:11:00.736699Z 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins 2020-04-15T13:11:00.737062Z 0 [Note] InnoDB: Uses event mutexes 2020-04-15T13:11:00.737184Z 0 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier 2020-04-15T13:11:00.737474Z 0 [Note] InnoDB: Compressed tables use zlib 1.2.8 2020-04-15T13:11:00.737603Z 0 [Note] InnoDB: Using Linux native AIO 2020-04-15T13:11:00.749952Z 0 [Note] InnoDB: Number of pools: 1 2020-04-15T13:11:00.765989Z 0 [Note] InnoDB: Using CPU crc32 instructions 2020-04-15T13:11:00.852956Z 0 [Note] InnoDB: Initializing buffer pool, total size = 128M, instances = 1, chunk size = 128M 2020-04-15T13:11:01.057858Z 0 [Note] InnoDB: Completed initialization of buffer pool 2020-04-15T13:11:01.106705Z 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority(). 2020-04-15T13:11:01.180264Z 0 [Note] InnoDB: Highest supported file format is Barracuda. 2020-04-15T13:11:01.498252Z 0 [Note] InnoDB: Creating shared tablespace for temporary tables 2020-04-15T13:11:01.505374Z 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ... 2020-04-15T13:11:01.724865Z 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB. 2020-04-15T13:11:01.726379Z 0 [Note] InnoDB: 96 redo rollback segment(s) found. 96 redo rollback segment(s) are active. 2020-04-15T13:11:01.738880Z 0 [Note] InnoDB: 32 non-redo rollback segment(s) are active. 
2020-04-15T13:11:01.739464Z 0 [Note] InnoDB: Waiting for purge to start 2020-04-15T13:11:01.828422Z 0 [Note] InnoDB: 5.7.13 started; log sequence number 12577494 2020-04-15T13:11:01.840995Z 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool 2020-04-15T13:11:01.841477Z 0 [Note] Plugin 'FEDERATED' is disabled. 2020-04-15T13:11:02.394097Z 0 [Note] InnoDB: Buffer pool(s) load completed at 200415 13:11:02 2020-04-15T13:11:02.596792Z 0 [Warning] Failed to set up SSL because of the following SSL library error: SSL context is not usable without certificate and private key 2020-04-15T13:11:02.597011Z 0 [Note] Server hostname (bind-address): '*'; port: 3306 2020-04-15T13:11:02.602675Z 0 [Note] IPv6 is available. 2020-04-15T13:11:02.607794Z 0 [Note] - '::' resolves to '::'; 2020-04-15T13:11:02.608115Z 0 [Note] Server socket created on IP: '::'. 2020-04-15T13:11:02.663550Z 0 [Warning] 'db' entry 'sys mysql.sys@localhost' ignored in --skip-name-resolve mode. 2020-04-15T13:11:02.672863Z 0 [Warning] 'proxies_priv' entry '@ root@localhost' ignored in --skip-name-resolve mode. 2020-04-15T13:11:02.802841Z 0 [Warning] 'tables_priv' entry 'sys_config mysql.sys@localhost' ignored in --skip-name-resolve mode. 2020-04-15T13:11:03.356342Z 0 [Note] Event Scheduler: Loaded 0 events 2020-04-15T13:11:03.430803Z 0 [Note] mysqld: ready for connections. 
Version: '5.7.13-log' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server (GPL)
2020-04-15T14:00:50.453847Z 55 [Note] Aborted connection 55 to db: 'ftgo_restaurant_service' user: 'ftgo_restaurant_service_user' host: '172.18.0.14' (Got an error reading communication packets)
2020-04-15T14:00:50.644494Z 57 [Note] Aborted connection 57 to db: 'ftgo_restaurant_service' user: 'ftgo_restaurant_service_user' host: '172.18.0.14' (Got an error reading communication packets)
2020-04-15T14:00:50.676003Z 52 [Note] Aborted connection 52 to db: 'ftgo_restaurant_service' user: 'ftgo_restaurant_service_user' host: '172.18.0.14' (Got an error reading communication packets)
2020-04-15T14:00:50.676221Z 58 [Note] Aborted connection 58 to db: 'ftgo_restaurant_service' user: 'ftgo_restaurant_service_user' host: '172.18.0.14' (Got an error reading communication packets)
2020-04-15T14:00:50.933107Z 53 [Note] Aborted connection 53 to db: 'ftgo_restaurant_service' user: 'ftgo_restaurant_service_user' host: '172.18.0.14' (Got an error reading communication packets)
2020-04-15T14:00:50.948752Z 60 [Note] Aborted connection 60 to db: 'ftgo_restaurant_service' user: 'ftgo_restaurant_service_user' host: '172.18.0.14' (Got an error reading communication packets)
2020-04-15T14:00:51.026463Z 54 [Note] Aborted connection 54 to db: 'ftgo_restaurant_service' user: 'ftgo_restaurant_service_user' host: '172.18.0.14' (Got an error reading communication packets)
2020-04-15T14:00:51.073940Z 61 [Note] Aborted connection 61 to db: 'ftgo_restaurant_service' user: 'ftgo_restaurant_service_user' host: '172.18.0.14' (Got an error reading communication packets)
2020-04-15T14:00:51.143612Z 56 [Note] Aborted connection 56 to db: 'ftgo_restaurant_service' user: 'ftgo_restaurant_service_user' host: '172.18.0.14' (Got an error reading communication packets)
2020-04-15T14:00:51.255143Z 59 [Note] Aborted connection 59 to db: 'ftgo_restaurant_service' user: 'ftgo_restaurant_service_user' host: '172.18.0.14' (Got an error reading communication packets)
2020-04-15T15:29:15.940918Z 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 2380488ms. The settings might not be optimal. (flushed=0 and evicted=0, during the time.)
2020-04-15T16:26:49.996167Z 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 2784546ms. The settings might not be optimal. (flushed=0 and evicted=0, during the time.)
2020-04-15T17:14:31.780885Z 112 [Note] Start binlog_dump to master_thread_id(112) slave_server(1), pos(, 4)
2020-04-15T17:38:49.911542Z 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 8304ms. The settings might not be optimal. (flushed=3 and evicted=0, during the time.)
2020-04-15T17:40:43.732187Z 122 [Note] Aborted connection 122 to db: 'ftgo_order_service' user: 'ftgo_order_service_user' host: '172.18.0.10' (Got an error reading communication packets)
2020-04-15T17:40:43.732202Z 117 [Note] Aborted connection 117 to db: 'ftgo_order_service' user: 'ftgo_order_service_user' host: '172.18.0.10' (Got an error reading communication packets)
2020-04-15T17:40:43.732194Z 119 [Note] Aborted connection 119 to db: 'ftgo_order_service' user: 'ftgo_order_service_user' host: '172.18.0.10' (Got an error reading communication packets)
2020-04-15T17:40:43.732100Z 114 [Note] Aborted connection 114 to db: 'ftgo_order_service' user: 'ftgo_order_service_user' host: '172.18.0.10' (Got an error reading communication packets)
2020-04-15T17:40:43.732139Z 116 [Note] Aborted connection 116 to db: 'ftgo_order_service' user: 'ftgo_order_service_user' host: '172.18.0.10' (Got an error reading communication packets)
2020-04-15T17:40:43.740042Z 115 [Note] Aborted connection 115 to db: 'ftgo_order_service' user: 'ftgo_order_service_user' host: '172.18.0.10' (Got an error reading communication packets)
2020-04-15T17:40:43.736358Z 121 [Note] Aborted connection 121 to db: 'ftgo_order_service' user: 'ftgo_order_service_user' host: '172.18.0.10' (Got an error reading communication packets)
2020-04-15T17:40:43.724979Z 120 [Note] Aborted connection 120 to db: 'ftgo_order_service' user: 'ftgo_order_service_user' host: '172.18.0.10' (Got an error reading communication packets)
2020-04-15T17:40:43.999578Z 118 [Note] Aborted connection 118 to db: 'ftgo_order_service' user: 'ftgo_order_service_user' host: '172.18.0.10' (Got an error reading communication packets)
2020-04-15T17:40:44.041793Z 113 [Note] Aborted connection 113 to db: 'ftgo_order_service' user: 'ftgo_order_service_user' host: '172.18.0.10' (Got an error reading communication packets)
2020-04-18T07:13:52.096003Z 0 [Note] mysqld (mysqld 5.7.13-log) starting as process 1 ...
2020-04-18T07:13:52.224916Z 0 [Note] InnoDB: PUNCH HOLE support available
2020-04-18T07:13:52.225073Z 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2020-04-18T07:13:52.225117Z 0 [Note] InnoDB: Uses event mutexes
2020-04-18T07:13:52.225150Z 0 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier
2020-04-18T07:13:52.225182Z 0 [Note] InnoDB: Compressed tables use zlib 1.2.8
2020-04-18T07:13:52.225214Z 0 [Note] InnoDB: Using Linux native AIO
2020-04-18T07:13:52.308628Z 0 [Note] InnoDB: Number of pools: 1
2020-04-18T07:13:52.386475Z 0 [Note] InnoDB: Using CPU crc32 instructions
2020-04-18T07:13:52.722741Z 0 [Note] InnoDB: Initializing buffer pool, total size = 128M, instances = 1, chunk size = 128M
2020-04-18T07:13:52.829039Z 0 [Note] InnoDB: Completed initialization of buffer pool
2020-04-18T07:13:52.928039Z 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().
2020-04-18T07:13:53.084483Z 0 [Note] InnoDB: Highest supported file format is Barracuda.
2020-04-18T07:13:53.174920Z 0 [Note] InnoDB: Log scan progressed past the checkpoint lsn 12955149 2020-04-18T07:13:53.175152Z 0 [Note] InnoDB: Doing recovery: scanned up to log sequence number 12955593 2020-04-18T07:13:53.175429Z 0 [Note] InnoDB: Doing recovery: scanned up to log sequence number 12955593 2020-04-18T07:13:53.248180Z 0 [Note] InnoDB: Database was not shutdown normally! 2020-04-18T07:13:53.248915Z 0 [Note] InnoDB: Starting crash recovery. 2020-04-18T07:13:54.130761Z 0 [Note] InnoDB: Starting an apply batch of log records to the database... InnoDB: Progress in percent: 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 2020-04-18T07:13:54.140234Z 0 [Note] InnoDB: Apply batch completed 2020-04-18T07:13:54.144374Z 0 [Note] InnoDB: Last MySQL binlog file position 0 214200, file name mysql-bin.000003 2020-04-18T07:13:56.396950Z 0 [Note] InnoDB: Removed temporary tablespace data file: "ibtmp1" 2020-04-18T07:13:56.397085Z 0 [Note] InnoDB: Creating shared tablespace for temporary tables 2020-04-18T07:13:56.397196Z 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ... 2020-04-18T07:13:56.530420Z 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB. 2020-04-18T07:13:56.537638Z 0 [Note] InnoDB: 96 redo rollback segment(s) found. 96 redo rollback segment(s) are active. 2020-04-18T07:13:56.539102Z 0 [Note] InnoDB: 32 non-redo rollback segment(s) are active. 2020-04-18T07:13:56.539656Z 0 [Note] InnoDB: Waiting for purge to start 2020-04-18T07:13:56.595680Z 0 [Note] InnoDB: 5.7.13 started; log sequence number 12955593 2020-04-18T07:13:56.615211Z 0 [Note] Plugin 'FEDERATED' is disabled. 
2020-04-18T07:13:56.675438Z 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool 2020-04-18T07:13:56.864487Z 0 [Note] InnoDB: Buffer pool(s) load completed at 200418 7:13:56 2020-04-18T07:13:56.886345Z 0 [Note] Recovering after a crash using mysql-bin 2020-04-18T07:13:57.155669Z 0 [Note] Starting crash recovery... 2020-04-18T07:13:57.158952Z 0 [Note] Crash recovery finished. 2020-04-18T07:13:57.448942Z 0 [Warning] Failed to set up SSL because of the following SSL library error: SSL context is not usable without certificate and private key 2020-04-18T07:13:57.449311Z 0 [Note] Server hostname (bind-address): '*'; port: 3306 2020-04-18T07:13:57.449468Z 0 [Note] IPv6 is available. 2020-04-18T07:13:57.449527Z 0 [Note] - '::' resolves to '::'; 2020-04-18T07:13:57.449605Z 0 [Note] Server socket created on IP: '::'. 2020-04-18T07:13:57.720495Z 0 [Warning] 'db' entry 'sys mysql.sys@localhost' ignored in --skip-name-resolve mode. 2020-04-18T07:13:57.722192Z 0 [Warning] 'proxies_priv' entry '@ root@localhost' ignored in --skip-name-resolve mode. 2020-04-18T07:13:58.360160Z 0 [Warning] 'tables_priv' entry 'sys_config mysql.sys@localhost' ignored in --skip-name-resolve mode. 2020-04-18T07:14:02.644486Z 0 [Note] Event Scheduler: Loaded 0 events 2020-04-18T07:14:02.681781Z 0 [Note] mysqld: ready for connections. Version: '5.7.13-log' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server (GPL) 2020-04-18T08:37:09.388117Z 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 1612449ms. The settings might not be optimal. (flushed=0 and evic ted=0, during the time.)
D:\intelliJWorkspaces\manningWorkspace\ftgo-application>
Not quite sure what is going wrong. The logs seem fine.
One question: does docker ps show the cdc-service as healthy after it starts?
Also, please run:
git checkout wip-use-gradle-docker-compose
./gradlew compileAll assemble
./gradlew :composeUp
I am curious to see the output of the last command.
Tried docker-compose up -d cdc-service, then ran docker ps after a minute. Output below:
D:\intelliJWorkspaces\manningWorkspace\ftgo-application>docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4c1026381ea0 eventuateio/eventuate-cdc-service:0.4.0.RELEASE "/bin/sh -c 'java ${" 4 minutes ago Up 2 minutes (unhealthy) 0.0.0.0:8099->8080/tcp ftgo-application_cdc-service_1
e441177c9455 eventuateio/eventuate-kafka:0.3.0.RELEASE "/bin/bash -c ./run-" 5 minutes ago Up 2 minutes 0.0.0.0:9092->9092/tcp ftgo-application_kafka_1
8b92b6a275e5 eventuateio/eventuate-zookeeper:0.4.0.RELEASE "/usr/local/zookeepe" 5 minutes ago Up 2 minutes 2888/tcp, 0.0.0.0:2181->2181/tcp, 3888/tcp ftgo-application_zookeeper_1
7b6d9f6411f0 ftgo-application-master_dynamodblocal-init "/bin/sh -c './wait-" 6 days ago Up 18 seconds (healthy) ftgo-application-master_dynamodblocal-init_1
D:\intelliJWorkspaces\manningWorkspace\ftgo-application>
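An (unhealthy) status only says the healthcheck probe is failing, not why. A sketch of how to see the probe results themselves (the container ID is taken from the docker ps output above; this has to run against the live stack):

```shell
# Dump the healthcheck state, including the last few probe
# results with their exit codes and output
docker inspect --format '{{json .State.Health}}' 4c1026381ea0

# Follow the service log while the probes run, to catch the
# failure as it happens
docker logs -f 4c1026381ea0
```

The Log array in the inspect output usually pinpoints which endpoint or dependency the healthcheck cannot reach.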
Since the cdc-service shows as unhealthy, I obtained its container log below. -- cdc-service container log start --
D:\intelliJWorkspaces\manningWorkspace\ftgo-application>docker logs 4c1026381ea0
14:43:11.670 [main] INFO o.s.b.w.e.tomcat.TomcatWebServer - Tomcat initialized with port(s): 8080 (http) 14:43:11.911 [main] INFO o.s.web.context.ContextLoader - Root WebApplicationContext: initialization completed in 5548 ms 14:43:13.918 [main] INFO o.s.s.c.ThreadPoolTaskExecutor - Initializing ExecutorService 'applicationTaskExecutor' 14:43:14.148 [main] WARN o.a.c.retry.ExponentialBackoffRetry - maxRetries too large (2147483647). Pinning to 29 14:43:14.584 [main] INFO o.a.c.f.imps.CuratorFrameworkImpl - Starting 14:43:14.695 [main-EventThread] INFO o.a.c.f.state.ConnectionStateManager - State change: CONNECTED 14:43:14.787 [main] INFO o.a.k.c.producer.ProducerConfig - ProducerConfig values: metric.reporters = [] metadata.max.age.ms = 300000 reconnect.backoff.ms = 50 sasl.kerberos.ticket.renew.window.factor = 0.8 bootstrap.servers = [kafka:9092] ssl.keystore.type = JKS sasl.mechanism = GSSAPI max.block.ms = 60000 interceptor.classes = null ssl.truststore.password = null client.id = ssl.endpoint.identification.algorithm = null request.timeout.ms = 30000 acks = all receive.buffer.bytes = 32768 ssl.truststore.type = JKS retries = 0 ssl.truststore.location = null ssl.keystore.password = null send.buffer.bytes = 131072 compression.type = none metadata.fetch.timeout.ms = 60000 retry.backoff.ms = 100 sasl.kerberos.kinit.cmd = /usr/bin/kinit buffer.memory = 33554432 timeout.ms = 30000 key.serializer = class org.apache.kafka.common.serialization.StringSerializer sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 ssl.trustmanager.algorithm = PKIX block.on.buffer.full = false ssl.key.password = null sasl.kerberos.min.time.before.relogin = 60000 connections.max.idle.ms = 540000 max.in.flight.requests.per.connection = 5 metrics.num.samples = 2 ssl.protocol = TLS ssl.provider = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] batch.size = 16384 ssl.keystore.location = null ssl.cipher.suites = null security.protocol = PLAINTEXT max.request.size = 
1048576 value.serializer = class org.apache.kafka.common.serialization.StringSerializer ssl.keymanager.algorithm = SunX509 metrics.sample.window.ms = 30000 partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner linger.ms = 1
14:43:14.852 [main] INFO o.a.k.c.producer.ProducerConfig - ProducerConfig values: metric.reporters = [] metadata.max.age.ms = 300000 reconnect.backoff.ms = 50 sasl.kerberos.ticket.renew.window.factor = 0.8 bootstrap.servers = [kafka:9092] ssl.keystore.type = JKS sasl.mechanism = GSSAPI max.block.ms = 60000 interceptor.classes = null ssl.truststore.password = null client.id = producer-1 ssl.endpoint.identification.algorithm = null request.timeout.ms = 30000 acks = all receive.buffer.bytes = 32768 ssl.truststore.type = JKS retries = 0 ssl.truststore.location = null ssl.keystore.password = null send.buffer.bytes = 131072 compression.type = none metadata.fetch.timeout.ms = 60000 retry.backoff.ms = 100 sasl.kerberos.kinit.cmd = /usr/bin/kinit buffer.memory = 33554432 timeout.ms = 30000 key.serializer = class org.apache.kafka.common.serialization.StringSerializer sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 ssl.trustmanager.algorithm = PKIX block.on.buffer.full = false ssl.key.password = null sasl.kerberos.min.time.before.relogin = 60000 connections.max.idle.ms = 540000 max.in.flight.requests.per.connection = 5 metrics.num.samples = 2 ssl.protocol = TLS ssl.provider = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] batch.size = 16384 ssl.keystore.location = null ssl.cipher.suites = null security.protocol = PLAINTEXT max.request.size = 1048576 value.serializer = class org.apache.kafka.common.serialization.StringSerializer ssl.keymanager.algorithm = SunX509 metrics.sample.window.ms = 30000 partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner linger.ms = 1
14:43:14.855 [main] INFO o.a.kafka.common.utils.AppInfoParser - Kafka version : 0.10.0.1 14:43:14.855 [main] INFO o.a.kafka.common.utils.AppInfoParser - Kafka commitId : a7a17cdec9eaa6c5 14:43:15.118 [main] INFO i.e.l.u.c.p.CdcPipelineConfigurator - Starting unified cdc pipelines 14:43:15.376 [main] DEBUG i.e.local.common.CdcDataPublisher - Starting CdcDataPublisher 14:43:15.377 [main] INFO o.a.k.c.producer.ProducerConfig - ProducerConfig values: metric.reporters = [] metadata.max.age.ms = 300000 reconnect.backoff.ms = 50 sasl.kerberos.ticket.renew.window.factor = 0.8 bootstrap.servers = [kafka:9092] ssl.keystore.type = JKS sasl.mechanism = GSSAPI max.block.ms = 60000 interceptor.classes = null ssl.truststore.password = null client.id = ssl.endpoint.identification.algorithm = null request.timeout.ms = 30000 acks = all receive.buffer.bytes = 32768 ssl.truststore.type = JKS retries = 0 ssl.truststore.location = null ssl.keystore.password = null send.buffer.bytes = 131072 compression.type = none metadata.fetch.timeout.ms = 60000 retry.backoff.ms = 100 sasl.kerberos.kinit.cmd = /usr/bin/kinit buffer.memory = 33554432 timeout.ms = 30000 key.serializer = class org.apache.kafka.common.serialization.StringSerializer sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 ssl.trustmanager.algorithm = PKIX block.on.buffer.full = false ssl.key.password = null sasl.kerberos.min.time.before.relogin = 60000 connections.max.idle.ms = 540000 max.in.flight.requests.per.connection = 5 metrics.num.samples = 2 ssl.protocol = TLS ssl.provider = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] batch.size = 16384 ssl.keystore.location = null ssl.cipher.suites = null security.protocol = PLAINTEXT max.request.size = 1048576 value.serializer = class org.apache.kafka.common.serialization.StringSerializer ssl.keymanager.algorithm = SunX509 metrics.sample.window.ms = 30000 partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner 
linger.ms = 1
[snip: the same ProducerConfig values block and "Kafka version / commitId / Starting CdcDataPublisher" lines repeat for client.id producer-2 through producer-8]
14:43:15.446 [main] INFO o.a.kafka.common.utils.AppInfoParser - Kafka version : 0.10.0.1
14:43:15.446 [main] INFO o.a.kafka.common.utils.AppInfoParser - Kafka commitId : a7a17cdec9eaa6c5
14:43:15.447 [main] DEBUG i.e.local.common.CdcDataPublisher - Starting CdcDataPublisher
14:43:15.489 [main] INFO i.e.l.u.c.p.CdcPipelineConfigurator - Unified cdc pipelines are started
14:43:15.551 [Curator-LeaderSelector-0] WARN org.apache.curator.utils.ZKPaths - The version of ZooKeeper being used doesn't support Container nodes. CreateMode.PERSISTENT will be used instead.
14:43:15.591 [Curator-LeaderSelector-0] INFO i.e.l.m.binlog.MySqlBinaryLogClient - mysql binlog client started
14:43:15.725 [Curator-LeaderSelector-0] INFO o.a.k.c.consumer.ConsumerConfig - ConsumerConfig values:
    metric.reporters = []
    metadata.max.age.ms = 300000
    partition.assignment.strategy = [org.apache.kafka.clients.consumer.RangeAssignor]
    reconnect.backoff.ms = 50
    sasl.kerberos.ticket.renew.window.factor = 0.8
    max.partition.fetch.bytes = 1048576
    bootstrap.servers = [kafka:9092]
    ssl.keystore.type = JKS
    enable.auto.commit = false
    sasl.mechanism = GSSAPI
    interceptor.classes = null
    exclude.internal.topics = true
    ssl.truststore.password = null
    client.id =
    ssl.endpoint.identification.algorithm = null
    max.poll.records = 2147483647
    check.crcs = true
    request.timeout.ms = 40000
    heartbeat.interval.ms = 3000
    auto.commit.interval.ms = 1000
    receive.buffer.bytes = 65536
    ssl.truststore.type = JKS
    ssl.truststore.location = null
    ssl.keystore.password = null
    fetch.min.bytes = 1
    send.buffer.bytes = 131072
    value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
    group.id = c3de8eb3-d26b-4783-9604-4583929bcd1f
    retry.backoff.ms = 100
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    ssl.trustmanager.algorithm = PKIX
    ssl.key.password = null
    fetch.max.wait.ms = 500
    sasl.kerberos.min.time.before.relogin = 60000
    connections.max.idle.ms = 540000
    session.timeout.ms = 30000
    metrics.num.samples = 2
    key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
    ssl.protocol = TLS
    ssl.provider = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.keystore.location = null
    ssl.cipher.suites = null
    security.protocol = PLAINTEXT
    ssl.keymanager.algorithm = SunX509
    metrics.sample.window.ms = 30000
    auto.offset.reset = earliest
[snip: the same ConsumerConfig values block printed again with client.id = consumer-1]
14:43:15.920 [Curator-LeaderSelector-0] INFO o.a.kafka.common.utils.AppInfoParser - Kafka version : 0.10.0.1
14:43:15.920 [Curator-LeaderSelector-0] INFO o.a.kafka.common.utils.AppInfoParser - Kafka commitId : a7a17cdec9eaa6c5
14:43:16.059 [main] INFO o.s.b.a.e.web.EndpointLinksResolver - Exposing 2 endpoint(s) beneath base path '/actuator'
14:43:16.249 [main] INFO o.s.b.w.e.tomcat.TomcatWebServer - Tomcat started on port(s): 8080 (http) with context path ''
14:43:17.283 [Timer-0] ERROR com.zaxxer.hikari.pool.HikariPool - HikariPool-1 - Exception during pool initialization.
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:404)
at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:988)
at com.mysql.jdbc.MysqlIO.
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server. - retrying in 500 milliseconds
org.springframework.jdbc.CannotGetJdbcConnectionException: Failed to obtain JDBC Connection; nested exception is com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
	at org.springframework.jdbc.datasource.DataSourceUtils.getConnection(DataSourceUtils.java:81)
	at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:612)
	at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:862)
	at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:917)
	at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:927)
	at io.eventuate.local.common.CdcMonitoringDao.lambda$update$0(CdcMonitoringDao.java:36)
	at io.eventuate.local.common.DaoUtils.handleConnectionLost(DaoUtils.java:22)
	at io.eventuate.local.common.DaoUtils.handleConnectionLost(DaoUtils.java:51)
	at io.eventuate.local.common.CdcMonitoringDao.update(CdcMonitoringDao.java:32)
	at io.eventuate.local.db.log.common.DbLogMetrics$1.run(DbLogMetrics.java:85)
	at java.util.TimerThread.mainLoop(Timer.java:555)
	at java.util.TimerThread.run(Timer.java:505)
Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
[snip: the same CommunicationsException stack trace and "retrying in 500 milliseconds" message repeat several more times]
D:\intelliJWorkspaces\manningWorkspace\ftgo-application>
-- cdc-service container log ends --
--output of 1. git checkout wip-use-gradle-docker-compose --
Lenovo@PRAKASH-LENOVO MINGW64 /d/intelliJWorkspaces/manningWorkspace/ftgo-application (master)
$ git checkout wip-use-gradle-docker-compose
error: pathspec 'wip-use-gradle-docker-compose' did not match any file(s) known to git
Lenovo@PRAKASH-LENOVO MINGW64 /d/intelliJWorkspaces/manningWorkspace/ftgo-application (master) $
--output of 2. ./gradlew compileAll assemble --
Lenovo@PRAKASH-LENOVO MINGW64 /d/intelliJWorkspaces/manningWorkspace/ftgo-application (master)
$ ./gradlew compileAll assemble
Starting a Gradle Daemon (subsequent builds will be faster)
Task :buildSrc:compileJava NO-SOURCE
Task :buildSrc:compileGroovy UP-TO-DATE
Task :buildSrc:processResources NO-SOURCE
Task :buildSrc:classes UP-TO-DATE
Task :buildSrc:jar UP-TO-DATE
Task :buildSrc:assemble UP-TO-DATE
Task :buildSrc:compileTestJava NO-SOURCE
Task :buildSrc:compileTestGroovy NO-SOURCE
Task :buildSrc:processTestResources NO-SOURCE
Task :buildSrc:testClasses UP-TO-DATE
Task :buildSrc:test SKIPPED
Task :buildSrc:check UP-TO-DATE
Task :buildSrc:build UP-TO-DATE
FAILURE: Build failed with an exception.
What went wrong:
Task 'compileAll' not found in root project 'ftgo-application'. Some candidates are: 'compileJava'.
Try:
Run gradlew tasks to get a list of available tasks.
Run with --stacktrace option to get the stack trace.
Run with --info or --debug option to get more log output.
Run with --scan to get full insights.
Get more help at https://help.gradle.org
Deprecated Gradle features were used in this build, making it incompatible with Gradle 5.0. Use '--warning-mode all' to show the individual deprecation warnings. See https://docs.gradle.org/4.10.2/userguide/command_line_interface.html#sec:command_line_warnings
BUILD FAILED in 29s
Lenovo@PRAKASH-LENOVO MINGW64 /d/intelliJWorkspaces/manningWorkspace/ftgo-application (master)
--output of ./gradlew :composeUp --
Lenovo@PRAKASH-LENOVO MINGW64 /d/intelliJWorkspaces/manningWorkspace/ftgo-application (master)
$ ./gradlew :composeUp
Task :buildSrc:compileJava NO-SOURCE
Task :buildSrc:compileGroovy UP-TO-DATE
Task :buildSrc:processResources NO-SOURCE
Task :buildSrc:classes UP-TO-DATE
Task :buildSrc:jar UP-TO-DATE
Task :buildSrc:assemble UP-TO-DATE
Task :buildSrc:compileTestJava NO-SOURCE
Task :buildSrc:compileTestGroovy NO-SOURCE
Task :buildSrc:processTestResources NO-SOURCE
Task :buildSrc:testClasses UP-TO-DATE
Task :buildSrc:test SKIPPED
Task :buildSrc:check UP-TO-DATE
Task :buildSrc:build UP-TO-DATE
FAILURE: Build failed with an exception.
What went wrong:
Task 'composeUp' not found in root project 'ftgo-application'.
Try:
Run gradlew tasks to get a list of available tasks.
Run with --stacktrace option to get the stack trace.
Run with --info or --debug option to get more log output.
Run with --scan to get full insights.
Get more help at https://help.gradle.org
Deprecated Gradle features were used in this build, making it incompatible with Gradle 5.0. Use '--warning-mode all' to show the individual deprecation warnings. See https://docs.gradle.org/4.10.2/userguide/command_line_interface.html#sec:command_line_warnings
BUILD FAILED in 1s
Lenovo@PRAKASH-LENOVO MINGW64 /d/intelliJWorkspaces/manningWorkspace/ftgo-application (master) $
Please reply.
Thanks Prakash S.
Please look carefully for errors
The checkout failed: error: pathspec 'wip-use-gradle-docker-compose' did not match any file(s) known to git
git checkout wip-use-gradle-docker-compose --
Lenovo@PRAKASH-LENOVO MINGW64 /d/intelliJWorkspaces/manningWorkspace/ftgo-application (master)
$ git checkout wip-use-gradle-docker-compose
error: pathspec 'wip-use-gradle-docker-compose' did not match any file(s) known to git
Lenovo@PRAKASH-LENOVO MINGW64 /d/intelliJWorkspaces/manningWorkspace/ftgo-application (master)
I can see that there is no file named wip-use-gradle-docker-compose under the ftgo-application folder on GitHub. So will the 'git checkout' command ever succeed?
And what about the other two commands ('gradlew compileAll assemble' and 'gradlew :composeUp') you have provided? These are also failing. You did not comment on these.
And I've provided the container log of cdc-service, in which there is an exception trace:
14:43:17.285 [Timer-0] ERROR io.eventuate.local.common.DaoUtils - Could not access database Failed to obtain JDBC Connection; nested exception is com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
Caused by: java.net.UnknownHostException: mysql: unknown error
Let me know how I can proceed further.
See this branch https://github.com/microservices-patterns/ftgo-application/tree/wip-use-gradle-docker-compose
You need to do a
git fetch
git checkout wip-use-gradle-docker-compose
These commands are failing because the first command, which checks out the branch that implements them, failed:
And what about the other two commands ('gradlew compileAll assemble' and 'gradlew :composeUp') you have provided? These are also failing. You did not comment on these.
regarding
mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
Caused by: java.net.UnknownHostException: mysql: unknown error
For some reason, mysql is not running. Also, the mysql container is not showing in the output of docker ps. For some reason, it has crashed.
It would be helpful if any logs in the issue were links to gists (https://gist.github.com/). This issue has grown so long that it's difficult to read.
Ok. I'm now working on the mentioned branch (wip-use-gradle-docker-compose) instead of master. I gave the following commands:
gradlew buildContracts
gradlew compileAll assemble
gradlew :composeUp
This tries to bring up all the services into a healthy state one by one, but I observed that it's taking too long (more than 10 mins and still not complete). I got some meaningful info in the consumer-service container's log, where I had stopped (Ctrl+C while running gradlew :composeUp) as it was found unhealthy even after waiting for 10 mins. It shows an SQLException due to access denied for the ftgo consumer service user:
java.sql.SQLException: Access denied for user 'ftgo_consumer_service_user'@'172.19.0.14' (using password: YES)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:965) ~[mysql-connector-java-5.1.46.jar!/:5.1.46]
The full log is attached at gist.github.com.
It contains the consumer-service container's log as well as the mysql container's log. I noticed the mysql container's log showing a warning:
[Warning] Failed to set up SSL because of the following SSL library error: SSL context is not usable without certificate and private key
Could 'not having an SSL certificate' be the reason for access being denied to all the ftgo service users? Please reply.
Earlier I did not know the gist feature for posting logs hence included online. Sorry for that.
I tried to ping 172.19.0.14 but got no reply. Why is that?
Ping statistics for 172.19.0.14:
    Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),
Thanks Prakash S.
Hey Chris.
I've run:
docker-compose up -d kafka
docker-compose up -d cdc-service
docker-compose up -d ftgo-order-service
and then after 10 mins I'm seeing ftgo-order-service exited, but the mysql service is still up. I've provided the output of 'docker ps' and 'docker ps -a' along with a new log of the mysql container in the same gist on gist.github.com.
According to the new log obtained on 2020-04-24, 'docker ps' shows the line below in its output:
2154ee5c3d4b ftgo-application_mysql "docker-entrypoint.s" 2 days ago Up 13 minutes 0.0.0.0:3306->3306/tcp ftgo-application_mysql_1
And at the time of actually writing this it is showing the mysql service up for the past 32 mins.
I added the mysql container's log to the same gist. It has the lines below:
2020-04-24T13:27:22.398364Z 0 [Note] InnoDB: Buffer pool(s) load completed at 200424 13:27:22
2020-04-24T13:27:22.421534Z 0 [Note] Recovering after a crash using mysql-bin
2020-04-24T13:27:22.518679Z 0 [Note] Starting crash recovery...
2020-04-24T13:27:22.518735Z 0 [Note] Crash recovery finished.
2020-04-24T13:27:22.569098Z 0 [Warning] Failed to set up SSL because of the following SSL library error: SSL context is not usable without certificate and private key
This means mysql did crash, but crash recovery was also done. I do not know why it crashed. Probably you can suggest something.
At the time of writing this, cdc-service and the mysql service are both showing in 'docker ps', but 'order-service' is not.
Thanks Prakash S.
[Warning] Failed to set up SSL because of the following SSL library error: SSL context is not usable without certificate and private key
Could 'not having SSL certificate' be the reason of access denied to all ftgo services users? Please reply.
No.
Earlier I did not know the gist feature for posting logs hence included online. Sorry for that.
Tried to ping 172.19.0.14. No reply got. Why so?
Docker networking probably doesn't allow that.
Sadly, I'm not sure why it's not working on your machine.
Please just use the wip-use-gradle-docker-compose
branch to debug this problem.
First, run this:
./gradlew :composeUp -P startedService=mysql
Assuming that the directory is ftgo-application
run this command:
docker exec -it ftgo-application_mysql_1 bash
you should get a # prompt
Run this command
mysql -hlocalhost -uroot -prootpassword
you should get a mysql> prompt
Run this command
show databases;
This is what I get:
mysql> show databases;
+-------------------------+
| Database                |
+-------------------------+
| information_schema      |
| eventuate               |
| ftgo_accounting_service |
| ftgo_consumer_service   |
| ftgo_kitchen_service    |
| ftgo_order_service      |
| ftgo_restaurant_service |
| mysql                   |
| performance_schema      |
| sys                     |
+-------------------------+
10 rows in set (0.00 sec)
Chris.
Yes. I'm working on the mentioned branch only. The output of the very first command says 'Connection refused'.
D:\intelliJWorkspaces\manningWorkspace\wip-use-gradle-docker-compose-Branch\ftgo-application>gradlew :composeUp -P startedService=mysql
> Task :composeUp
Building mysql
Creating network "ftgo-application_default" with the default driver
Creating ftgo-application_mysql_1 ... done
<-------------> 0% EXECUTING [8s]
DOCKER_HOST environment variable detected - will be used as hostname of service mysql (192.168.99.100)
Probing TCP socket on 192.168.99.100:3306 of service 'mysql_1'
Waiting for TCP socket on 192.168.99.100:3306 of service 'mysql_1' (Connection refused: connect)
[... the line above repeated for over a minute ...]
<-------------> 0% EXECUTING [1m 11s] :composeUp
Terminate batch job (Y/N)? y
I have the mysql server installed on my PC. Do I need to run mysqld before I run gradlew :composeUp?
Thanks Prakash S.
The example is starting its own copy of MySQL as a Docker container.
gradlew :composeUp -P startedService=mysql
can't connect.
I assume that 192.168.99.100
is still the IP address of your Docker Toolbox VM.
What's the output of docker ps -a
and docker logs ftgo-application_mysql_1
-- output shows my Docker Toolbox is still using 192.168.99.100 as DOCKER_HOST_IP. The DOCKER_HOST_IP is also verified by the following command:
docker run -p 8889:8888 -e DOCKER_DIAGNOSTICS_PORT=8889 -e DOCKER_HOST_IP --rm eventuateio/eventuateio-docker-networking-diagnostics:0.2.0.RELEASE
This displays the following line on the console:
Making HTTP request to self via url= http://192.168.99.100:8889 SUCCESSS!!!!
But the problem continues: connection still refused. Could this be a problem because of the older version (5.7.x) of the mysql server? Or could it be a JDBC driver problem? I do not know, but I thought of confirming, since the existing 'mysql-8.0.13-winx64' version runs perfectly without any problems on my PC (I did not test it with any microservices app or docker). But I feel 5.7.x should also work fine.
The command 'gradlew :composeUp -P startedService=mysql' could not bring the mysql service up. I terminated it using Ctrl+C, and then 'docker ps -a' shows the mysql service exited. A new log of the mysql container has been added at the same gist link.
Thanks Prakash S.
Regarding
But the problem continues. Connection still refused.
Could this be a problem because older version (5.7.x) of mysql server? Or could it be JDBC driver problem? I do not know but thought of confirming as existing 'mysql-8.0.13-winx64' version running perfectly without any problems on my PC(I did not test it any microservices app or docker). But I feel 5.7.x should also work fine.
What's running on your machine is quite separate from what's running in the Docker containers. What's more, the FTGO example application runs without issue on numerous machines. The challenge is figuring out why it is not working for you.
This looks like the problem - see end of MySQL log:
/usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/4.compile-schema-per-service.sh
/docker-entrypoint-initdb.d/4.compile-schema-per-service.sh: line 2: $'\r': command not found
The MySQL server is failing to start because of what looks like a Windows vs. Linux line ending problem.
I think what has happened is that git clone checked out the files with line endings suitable for your Windows machine. However, this is a problem for mysql/*.sh
since those scripts execute in a Linux Docker container.
Can you try converting the line endings of those files to Linux-style newlines? https://stackoverflow.com/questions/20368781/anything-like-dos2unix-for-windows
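If a dos2unix tool isn't handy, stripping the carriage returns with sed works too. A minimal sketch, assuming a GNU sed (e.g. in Git Bash) and that it is run from the repository root; the file globs are the ones named later in this thread:

```shell
# Strip the trailing Windows CR character from each line of the
# init scripts so they execute cleanly inside the Linux containers.
for f in mysql/*.sh dynamodblocal-init/*.sh; do
  sed -i 's/\r$//' "$f"
done
```

Note that this only fixes the working copy; unless git's line-ending settings are also adjusted, a fresh clone can reintroduce the CRLF endings.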
I just pushed a change to the wip-use-gradle-docker-compose
branch
git pull
git add --renormalize mysql/*.sh dynamodblocal-init/*.sh
I think the git add --renormalize should fix the files.
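For reference, `git add --renormalize` re-applies the repository's line-ending rules to files already in the index, so it only helps once a `.gitattributes` rule covers the scripts. A sketch of what such a setup looks like; the attribute line here is an assumption for illustration, not a copy of the change that was actually pushed to the branch:

```shell
# Declare that shell scripts are text with LF line endings, so git
# normalizes them on check-in (and, with eol=lf, on check-out too).
printf '*.sh text eol=lf\n' >> .gitattributes

# Re-stage the already-tracked scripts so the new rule takes effect.
git add --renormalize mysql/*.sh dynamodblocal-init/*.sh
```

After committing, a fresh clone checks the scripts out with LF endings even on Windows.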
Please try running MySql again: https://github.com/microservices-patterns/ftgo-application/issues/85#issuecomment-619120279
Chris.
I understand you've pushed the necessary change to the 'wip-use-gradle-docker-compose' branch. I just took the latest of that branch using the following command:
git clone -b wip-use-gradle-docker-compose --single-branch https://github.com/microservices-patterns/ftgo-application.git
and did the build process again, and finally did composeUp on the mysql service. I could see that I can log in to mysql at the # prompt and see the databases that you mentioned. Finally, I could bring all the docker processes up using gradlew :composeUp, and I can see the swagger UIs now. Thanks a lot for helping me get the example project running successfully on my PC. I'm quickly summarizing below the list of commands that I used to get the ftgo-application running. This may be helpful for others.
-- sample output of 'gradlew :composeUp' --
D:\intelliJWorkspaces\manningWorkspace\wip-use-gradle-docker-compose-Branch\ftgo-application>gradlew :composeUp
Starting a Gradle Daemon, 1 stopped Daemon could not be reused, use --status for details
> Task :composeUp
zookeeper uses an image, skipping
kafka uses an image, skipping
cdc-service uses an image, skipping
zipkin uses an image, skipping
Building mysql
Building ftgo-consumer-service
Building ftgo-kitchen-service
Building ftgo-restaurant-service
Building ftgo-accounting-service
Building ftgo-api-gateway
Building ftgo-order-service
Building dynamodblocal
Building dynamodblocal-init
Building ftgo-order-history-service
Creating network "ftgo-application_default" with the default driver
Creating ftgo-application_ftgo-api-gateway_1 ... done
Creating ftgo-application_mysql_1 ... done
Creating ftgo-application_zookeeper_1 ... done
Creating ftgo-application_dynamodblocal_1 ... done
Creating ftgo-application_zipkin_1 ... done
Creating ftgo-application_kafka_1 ... done
Creating ftgo-application_dynamodblocal-init_1 ... done
Creating ftgo-application_cdc-service_1 ... done
Creating ftgo-application_ftgo-accounting-service_1 ... done
Creating ftgo-application_ftgo-order-service_1 ... done
Creating ftgo-application_ftgo-restaurant-service_1 ... done
Creating ftgo-application_ftgo-consumer-service_1 ... done
Creating ftgo-application_ftgo-kitchen-service_1 ... done
Creating ftgo-application_ftgo-order-history-service_1 ... done
DOCKER_HOST environment variable detected - will be used as hostname of service zookeeper (192.168.99.100)
[... similar DOCKER_HOST lines for kafka, mysql, cdc-service, ftgo-consumer-service, ftgo-kitchen-service, ftgo-restaurant-service, ftgo-accounting-service, ftgo-api-gateway, zipkin, ftgo-order-service, dynamodblocal, dynamodblocal-init and ftgo-order-history-service ...]
Waiting for cdc-service_1 to become healthy (it's unhealthy)
[... the line above repeated about 20 times ...]
cdc-service_1 health state reported as 'healthy' - continuing...
Waiting for ftgo-consumer-service_1 to become healthy (it's unhealthy)
[... the line above repeated about 11 times ...]
ftgo-consumer-service_1 health state reported as 'healthy' - continuing...
Waiting for ftgo-kitchen-service_1 to become healthy (it's unhealthy)
ftgo-kitchen-service_1 health state reported as 'healthy' - continuing...
ftgo-restaurant-service_1 health state reported as 'healthy' - continuing...
ftgo-accounting-service_1 health state reported as 'healthy' - continuing...
ftgo-api-gateway_1 health state reported as 'healthy' - continuing...
Waiting for ftgo-order-service_1 to become healthy (it's unhealthy)
Waiting for ftgo-order-service_1 to become healthy (it's unhealthy)
ftgo-order-service_1 health state reported as 'healthy' - continuing...
dynamodblocal_1 health state reported as 'healthy' - continuing...
dynamodblocal-init_1 health state reported as 'healthy' - continuing...
ftgo-order-history-service_1 health state reported as 'healthy' - continuing...
Probing TCP socket on 192.168.99.100:2181 of service 'zookeeper_1'
TCP socket on 192.168.99.100:2181 of service 'zookeeper_1' is ready
[... similar probe/ready pairs for kafka (9092), mysql (3306), cdc-service (8099), ftgo-consumer-service (8081), ftgo-kitchen-service (8083), ftgo-restaurant-service (8084), ftgo-accounting-service (8085), ftgo-api-gateway (8087), zipkin (9411), ftgo-order-service (8082), dynamodblocal (8000) and ftgo-order-history-service (8086) ...]
Deprecated Gradle features were used in this build, making it incompatible with Gradle 5.0. Use '--warning-mode all' to show the individual deprecation warnings. See https://docs.gradle.org/4.10.2/userguide/command_line_interface.html#sec:command_line_warnings
BUILD SUCCESSFUL in 5m 0s
1 actionable task: 1 executed
D:\intelliJWorkspaces\manningWorkspace\wip-use-gradle-docker-compose-Branch\ftgo-application>
Thanking you. Prakash S. Mumbai, India
Chris.
One last question: how do I specify the price of a menu item in 'Create Restaurant'? Please see if the following request body is proper:
{
  "menu": {
    "menuItems": [
      {
        "id": "101S",
        "name": "Burger",
        "price": {10}
      }
    ]
  },
  "name": "Prashant Snacks Corner"
}
I'm getting the following error related to the 'net.chrisrichardson.ftgo.common.Money' object.
{
"timestamp": 1587825240203,
"status": 400,
"error": "Bad Request",
"message": "JSON parse error: Cannot deserialize instance of net.chrisrichardson.ftgo.common.Money
out of START_OBJECT token; nested exception is com.fasterxml.jackson.databind.JsonMappingException: Cannot deserialize instance of net.chrisrichardson.ftgo.common.Money
out of START_OBJECT token\n at [Source: (PushbackInputStream); line: 7, column: 18] (through reference chain: net.chrisrichardson.ftgo.restaurantservice.events.CreateRestaurantRequest[\"menu\"]->net.chrisrichardson.ftgo.restaurantservice.events.RestaurantMenu[\"menuItems\"]->java.util.ArrayList[0]->net.chrisrichardson.ftgo.restaurantservice.events.MenuItem[\"price\"])",
"path": "/restaurants"
}
Thanks Prakash S.
Forgot to attach the screenshot. Attaching it now.
{
"menu": {
"menuItems": [
{
"id": "1",
"name": "Chicken Tika",
"price": "10.00"
}
]
},
"name": "Ajanta"
}
Chris.
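As a side note for anyone hitting the same error: the original `"price": {10}` form is not even valid JSON, while the working body above sends the price as a plain string. A quick way to check a request body before posting it, assuming `python3` is on the PATH (this uses the stock json.tool module, not anything from the FTGO project):

```shell
# Validate candidate request bodies locally before posting them.
echo '{"price": "10.00"}' | python3 -m json.tool          # parses fine
echo '{"price": {10}}'    | python3 -m json.tool || true  # rejected: invalid JSON
```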
The above request body worked fine. I'm now trying another service, 'Revise Order'. What could be a sample request body for it? What can I specify in place of 'additionalProp1', 'additionalProp2', etc.? Does this operation revise the total order price?
{
  "revisedLineItemQuantities": {
    "additionalProp1": 0,
    "additionalProp2": 0,
    "additionalProp3": 0
  }
}
Thanks Prakash S.
I'll investigate but you have access to the source code!
Hi.
Today I followed the instructions given in README.adoc step by step. I'm trying to run the application on my local laptop, i.e. on Windows. Everything worked fine except some deprecation warnings. The last command I entered is below; it gave the following output:
ftgo-application-master> docker-compose up -d
.
.
Creating ftgo-application-master_ftgo-api-gateway_1 ... done
Creating ftgo-application-master_mysql_1 ... done
Creating ftgo-application-master_zookeeper_1 ... done
Creating ftgo-application-master_dynamodblocal_1 ... done
Creating ftgo-application-master_zipkin_1 ... done
Creating ftgo-application-master_dynamodblocal-init_1 ... done
Creating ftgo-application-master_kafka_1 ... done
Creating ftgo-application-master_cdc-service_1 ... done
Creating ftgo-application-master_ftgo-accounting-service_1 ... done
Creating ftgo-application-master_ftgo-order-service_1 ... done
Creating ftgo-application-master_ftgo-restaurant-service_1 ... done
Creating ftgo-application-master_ftgo-consumer-service_1 ... done
Creating ftgo-application-master_ftgo-kitchen-service_1 ... done
Creating ftgo-application-master_ftgo-order-history-service_1 ... done
ftgo-application-master>
Now, as per the README document, I tried accessing the swagger UIs. But I found that none of the swagger UIs are accessible. It simply says "This site can’t be reached. 192.168.99.100 refused to connect." (in my case DOCKER_HOST_IP is 192.168.99.100). I don't know why this is not working.
There is one more URL given in README.adoc, which is for accessing the services via the api gateway: http://${DOCKER_HOST_IP?}:8087
In my case it is http://192.168.99.100:8087/ - this gives a Whitelabel Error Page, which means the api gateway is started. Can you let me know how to access a running service via the api gateway? I need the full URL. Also, please give a sample JSON to invoke one service, let's say 'Create Consumer'.
Since the Swagger UIs are not accessible, I'm a bit handicapped and have to rely on the api gateway URL for accessing the individual services, and also on the sample JSON.
Thanks Prakash S. Mumbai, India