usc-isi-i2 / dig-etl-engine

Download DIG to run on your laptop or server.
http://usc-isi-i2.github.io/dig/
MIT License

./engine.sh up fails on multiple containers #275

Closed mbach04 closed 5 years ago

mbach04 commented 5 years ago

When following the README, several containers fail to start, which leads me to suspect there's a step not documented here. The following containers cascade in failure: ES (Elasticsearch), landmark-mysql, Kibana, sandpaper, Logstash, nginx, and Kafka.

Log output of docker-compose -f docker-compose.yml up:

Starting dig_zookeeper_1       ... done
Starting dig_landmark-portal_1 ... done
Starting dig_landmark-mysql_1  ... done
Starting dig_landmark-rest_1   ... done
Starting dig_dig_etl_engine_1  ... done
Starting dig_digui_1           ... done
Starting dig_sandpaper_1       ... done
Starting dig_kibana_1          ... done
Starting dig_kafka_1           ... done
Starting dig_mydig_ws_1        ... done
Starting dig_logstash_1        ... done
Starting dig_nginx_1           ... done
Attaching to dig_elasticsearch_1, dig_zookeeper_1, dig_landmark-mysql_1, dig_landmark-rest_1, dig_landmark-portal_1, dig_kibana_1, dig_digui_1, dig_sandpaper_1, dig_dig_etl_engine_1, dig_kafka_1, dig_mydig_ws_1, dig_logstash_1, dig_nginx_1
elasticsearch_1    | OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
elasticsearch_1    | Exception in thread "main" 2019-01-17 04:02:14,504 main ERROR No log4j2 configuration file found. Using default configuration: logging only errors to the console. Set system property 'log4j2.debug' to show Log4j2 internal initialization logging.
elasticsearch_1    | SettingsException[Failed to load settings from /usr/share/elasticsearch/config/elasticsearch.yml]; nested: AccessDeniedException[/usr/share/elasticsearch/config/elasticsearch.yml];
elasticsearch_1    |    at org.elasticsearch.node.InternalSettingsPreparer.prepareEnvironment(InternalSettingsPreparer.java:102)
elasticsearch_1    |    at org.elasticsearch.cli.EnvironmentAwareCommand.createEnv(EnvironmentAwareCommand.java:75)
elasticsearch_1    |    at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:70)
elasticsearch_1    |    at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:134)
elasticsearch_1    |    at org.elasticsearch.cli.Command.main(Command.java:90)
elasticsearch_1    |    at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:91)
elasticsearch_1    |    at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:84)
elasticsearch_1    | Caused by: java.nio.file.AccessDeniedException: /usr/share/elasticsearch/config/elasticsearch.yml
elasticsearch_1    |    at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
elasticsearch_1    |    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
elasticsearch_1    |    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
elasticsearch_1    |    at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
elasticsearch_1    |    at java.nio.file.Files.newByteChannel(Files.java:361)
elasticsearch_1    |    at java.nio.file.Files.newByteChannel(Files.java:407)
elasticsearch_1    |    at java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:384)
elasticsearch_1    |    at java.nio.file.Files.newInputStream(Files.java:152)
elasticsearch_1    |    at org.elasticsearch.common.settings.Settings$Builder.loadFromPath(Settings.java:1032)
elasticsearch_1    |    at org.elasticsearch.node.InternalSettingsPreparer.prepareEnvironment(InternalSettingsPreparer.java:100)
elasticsearch_1    |    ... 6 more
dig_elasticsearch_1 exited with code 1
zookeeper_1        | ZooKeeper JMX enabled by default
zookeeper_1        | Using config: /opt/zookeeper-3.4.9/bin/../conf/zoo.cfg
zookeeper_1        | 2019-01-17 04:02:12,703 [myid:] - INFO  [main:QuorumPeerConfig@124] - Reading configuration from: /opt/zookeeper-3.4.9/bin/../conf/zoo.cfg
zookeeper_1        | 2019-01-17 04:02:12,751 [myid:] - INFO  [main:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3
zookeeper_1        | 2019-01-17 04:02:12,752 [myid:] - INFO  [main:DatadirCleanupManager@79] - autopurge.purgeInterval set to 1
zookeeper_1        | 2019-01-17 04:02:12,753 [myid:] - WARN  [main:QuorumPeerMain@113] - Either no config or no quorum defined in config, running  in standalone mode
zookeeper_1        | 2019-01-17 04:02:12,794 [myid:] - INFO  [PurgeTask:DatadirCleanupManager$PurgeTask@138] - Purge task started.
zookeeper_1        | 2019-01-17 04:02:12,932 [myid:] - INFO  [PurgeTask:DatadirCleanupManager$PurgeTask@144] - Purge task completed.
zookeeper_1        | 2019-01-17 04:02:12,933 [myid:] - INFO  [main:QuorumPeerConfig@124] - Reading configuration from: /opt/zookeeper-3.4.9/bin/../conf/zoo.cfg
zookeeper_1        | 2019-01-17 04:02:12,934 [myid:] - INFO  [main:ZooKeeperServerMain@96] - Starting server
zookeeper_1        | 2019-01-17 04:02:13,032 [myid:] - INFO  [main:Environment@100] - Server environment:zookeeper.version=3.4.9-1757313, built on 08/23/2016 06:50 GMT
zookeeper_1        | 2019-01-17 04:02:13,032 [myid:] - INFO  [main:Environment@100] - Server environment:host.name=2a216d29379a
zookeeper_1        | 2019-01-17 04:02:13,033 [myid:] - INFO  [main:Environment@100] - Server environment:java.version=1.7.0_65
zookeeper_1        | 2019-01-17 04:02:13,033 [myid:] - INFO  [main:Environment@100] - Server environment:java.vendor=Oracle Corporation
zookeeper_1        | 2019-01-17 04:02:13,033 [myid:] - INFO  [main:Environment@100] - Server environment:java.home=/usr/lib/jvm/java-7-openjdk-amd64/jre
zookeeper_1        | 2019-01-17 04:02:13,033 [myid:] - INFO  [main:Environment@100] - Server environment:java.class.path=/opt/zookeeper-3.4.9/bin/../build/classes:/opt/zookeeper-3.4.9/bin/../build/lib/*.jar:/opt/zookeeper-3.4.9/bin/../lib/slf4j-log4j12-1.6.1.jar:/opt/zookeeper-3.4.9/bin/../lib/slf4j-api-1.6.1.jar:/opt/zookeeper-3.4.9/bin/../lib/netty-3.10.5.Final.jar:/opt/zookeeper-3.4.9/bin/../lib/log4j-1.2.16.jar:/opt/zookeeper-3.4.9/bin/../lib/jline-0.9.94.jar:/opt/zookeeper-3.4.9/bin/../zookeeper-3.4.9.jar:/opt/zookeeper-3.4.9/bin/../src/java/lib/*.jar:/opt/zookeeper-3.4.9/bin/../conf:
zookeeper_1        | 2019-01-17 04:02:13,033 [myid:] - INFO  [main:Environment@100] - Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib
zookeeper_1        | 2019-01-17 04:02:13,033 [myid:] - INFO  [main:Environment@100] - Server environment:java.io.tmpdir=/tmp
zookeeper_1        | 2019-01-17 04:02:13,158 [myid:] - INFO  [main:Environment@100] - Server environment:java.compiler=<NA>
zookeeper_1        | 2019-01-17 04:02:13,158 [myid:] - INFO  [main:Environment@100] - Server environment:os.name=Linux
zookeeper_1        | 2019-01-17 04:02:13,158 [myid:] - INFO  [main:Environment@100] - Server environment:os.arch=amd64
zookeeper_1        | 2019-01-17 04:02:13,158 [myid:] - INFO  [main:Environment@100] - Server environment:os.version=3.10.0-862.el7.x86_64
zookeeper_1        | 2019-01-17 04:02:13,159 [myid:] - INFO  [main:Environment@100] - Server environment:user.name=root
zookeeper_1        | 2019-01-17 04:02:13,159 [myid:] - INFO  [main:Environment@100] - Server environment:user.home=/root
zookeeper_1        | 2019-01-17 04:02:13,159 [myid:] - INFO  [main:Environment@100] - Server environment:user.dir=/opt/zookeeper-3.4.9
zookeeper_1        | 2019-01-17 04:02:13,160 [myid:] - INFO  [main:ZooKeeperServer@815] - tickTime set to 2000
zookeeper_1        | 2019-01-17 04:02:13,160 [myid:] - INFO  [main:ZooKeeperServer@824] - minSessionTimeout set to -1
zookeeper_1        | 2019-01-17 04:02:13,240 [myid:] - INFO  [main:ZooKeeperServer@833] - maxSessionTimeout set to -1
zookeeper_1        | 2019-01-17 04:02:13,326 [myid:] - INFO  [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:2181
landmark-mysql_1   | chown: cannot read directory '/var/lib/mysql/': Permission denied
dig_landmark-mysql_1 exited with code 1
landmark-rest_1    | No wait targets found.
landmark-rest_1    | Waiting for 0 seconds.
landmark-rest_1    | Waiting for 0 seconds.
landmark-rest_1    | INFO:werkzeug: * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
landmark-rest_1    | INFO:werkzeug: * Restarting with stat
landmark-rest_1    | WARNING:werkzeug: * Debugger is active!
landmark-rest_1    | INFO:werkzeug: * Debugger PIN: 210-090-476
landmark-portal_1  | Listening on port 3333
kibana_1           | FATAL CLI ERROR Error: EACCES: permission denied, open '/usr/share/kibana/config/kibana.yml'
kibana_1           |     at Error (native)
kibana_1           |     at Object.fs.openSync (fs.js:642:18)
kibana_1           |     at fs.readFileSync (fs.js:510:33)
kibana_1           |     at files.map.path (/usr/share/kibana/src/cli/serve/read_yaml_config.js:10:78)
kibana_1           |     at Array.map (native)
kibana_1           |     at exports.default (/usr/share/kibana/src/cli/serve/read_yaml_config.js:10:23)
kibana_1           |     at readServerSettings (/usr/share/kibana/src/cli/serve/serve.js:48:51)
kibana_1           |     at getCurrentSettings (/usr/share/kibana/src/cli/serve/serve.js:110:16)
kibana_1           |     at Command.<anonymous> (/usr/share/kibana/src/cli/serve/serve.js:112:24)
kibana_1           |     at next (native)
dig_kibana_1 exited with code 1
digui_1            | 
digui_1            | > dig-ui@3.4.6 start /usr/src/app
digui_1            | > node server/app.js
digui_1            | 
digui_1            | Express server listening on 8080, in production mode
digui_1            | Server config { env: 'production',
digui_1            |   root: '/usr/src/app',
digui_1            |   port: 8080,
digui_1            |   ip: undefined,
digui_1            |   appVersion: '3.4.6',
digui_1            |   auth: false,
digui_1            |   authLoginUrl: undefined,
digui_1            |   authTokenUrl: undefined,
digui_1            |   configEndpoint: 'http://localhost:12497/mydig_projects/',
digui_1            |   configPassword: '',
digui_1            |   configUsername: '',
digui_1            |   databaseType: 'sample',
digui_1            |   defaultProject: undefined,
digui_1            |   downloadImageUrl: 'downloadImage',
digui_1            |   esHost: { host: 'http://localhost:12497/es', apiVersion: '5.0' },
digui_1            |   esHostString: 'http://elasticsearch:9200/',
digui_1            |   hideBulkSearch: false,
digui_1            |   hideCachedPage: false,
digui_1            |   hideDatabaseInfo: false,
digui_1            |   imageServiceConfig: { auth: {}, endpoint: {}, host: {} },
digui_1            |   imageUrlPrefix: '',
digui_1            |   imageUrlSuffix: '',
digui_1            |   logIndexName: 'dig-logs',
digui_1            |   logIndexType: 'log',
digui_1            |   masterOverride: false,
digui_1            |   overrideConfig: undefined,
digui_1            |   overrideSearchEndpoint: undefined,
digui_1            |   pathPrefix: '/dig-ui/',
digui_1            |   prettyDomain: undefined,
digui_1            |   profileIndexName: 'dig-profiles',
digui_1            |   profileIndexType: 'profile',
digui_1            |   resultIcon: 'av:web-asset',
digui_1            |   resultNamePlural: 'Webpages',
digui_1            |   resultNameSingular: 'Webpage',
digui_1            |   resultQueryField: '_id',
digui_1            |   revisionsField: 'url',
digui_1            |   revisionsLabel: 'URL',
digui_1            |   searchConfig: 
digui_1            |    { 'http://sandpaper:9876': 'http://localhost:12497/search/coarse' },
digui_1            |   secret: 'dig memex',
digui_1            |   sendSearchesDirectlyToES: false,
digui_1            |   showEsData: 'true',
digui_1            |   showMultipleDescriptions: false,
digui_1            |   showMultipleTitles: false,
digui_1            |   stateIndexName: 'dig-states',
digui_1            |   stateIndexType: 'state',
digui_1            |   supportEmail: 'support@memex.software',
digui_1            |   tagsEntityEndpoint: 'http://localhost:12497/mydig_projects/PROJECT/tags/TAG/annotations/Ad/annotations',
digui_1            |   tagsExtractionEndpoint: 'http://localhost:12497/mydig_projects/PROJECT/entities/ENTITY_ID/fields/EXTRACTION_FIELD/annotations',
digui_1            |   tagsListEndpoint: 'http://localhost:12497/mydig_projects/PROJECT/tags',
digui_1            |   timestampField: 'timestamp_crawl',
digui_1            |   uidField: 'doc_id',
digui_1            |   userOverride: undefined }
sandpaper_1        | Traceback (most recent call last):
sandpaper_1        |   File "start.py", line 3, in <module>
sandpaper_1        |     main.main(sys.argv[1:])
sandpaper_1        |   File "/root/elasticsearch2/lib/python3.4/site-packages/digsandpaper/main.py", line 54, in main
sandpaper_1        |     config = load_json_file(config_file)
sandpaper_1        |   File "/root/elasticsearch2/lib/python3.4/site-packages/digsandpaper/main.py", line 12, in load_json_file
sandpaper_1        |     rules = json.load(codecs.open(file_name, 'r', 'utf-8'))
sandpaper_1        |   File "/usr/lib/python3.4/codecs.py", line 896, in open
sandpaper_1        |     file = builtins.open(filename, mode, buffering)
sandpaper_1        | PermissionError: [Errno 13] Permission denied: 'config/sandpaper.json'
dig_sandpaper_1 exited with code 1
kafka_1            | waiting for kafka to be ready
kafka_1            | [Configuring] 'log.cleanup.policy' in '/opt/kafka/config/server.properties'
kafka_1            | [Configuring] 'replica.fetch.response.max.bytes' in '/opt/kafka/config/server.properties'
kafka_1            | [Configuring] 'replica.fetch.max.bytes' in '/opt/kafka/config/server.properties'
kafka_1            | [Configuring] 'log.cleaner.enable' in '/opt/kafka/config/server.properties'
kafka_1            | [Configuring] 'advertised.port' in '/opt/kafka/config/server.properties'
kafka_1            | [Configuring] 'log.cleaner.delete.retention.ms' in '/opt/kafka/config/server.properties'
kafka_1            | Excluding KAFKA_HOME from broker config
kafka_1            | [Configuring] 'advertised.host.name' in '/opt/kafka/config/server.properties'
kafka_1            | [Configuring] 'heartbeat.interval.ms' in '/opt/kafka/config/server.properties'
kafka_1            | [Configuring] 'num.partitions' in '/opt/kafka/config/server.properties'
kafka_1            | [Configuring] 'delete.topic.enable' in '/opt/kafka/config/server.properties'
kafka_1            | [Configuring] 'message.max.bytes' in '/opt/kafka/config/server.properties'
kafka_1            | [Configuring] 'port' in '/opt/kafka/config/server.properties'
kafka_1            | [Configuring] 'auto.create.topics.enable' in '/opt/kafka/config/server.properties'
kafka_1            | [Configuring] 'log.retention.check.interval.ms' in '/opt/kafka/config/server.properties'
kafka_1            | [Configuring] 'broker.id' in '/opt/kafka/config/server.properties'
kafka_1            | Excluding KAFKA_VERSION from broker config
kafka_1            | [Configuring] 'zookeeper.connect' in '/opt/kafka/config/server.properties'
kafka_1            | [Configuring] 'group.max.session.timeout.ms' in '/opt/kafka/config/server.properties'
kafka_1            | [Configuring] 'log.cleaner.backoff.ms' in '/opt/kafka/config/server.properties'
kafka_1            | [Configuring] 'log.dirs' in '/opt/kafka/config/server.properties'
mydig_ws_1         | killing daemon process (if exists)
mydig_ws_1         | starting daemon (ETK spaCy)
mydig_ws_1         | done
mydig_ws_1         | Traceback (most recent call last):
mydig_ws_1         |   File "etk_spacy.py", line 8, in <module>
mydig_ws_1         |     from config import config
mydig_ws_1         | ModuleNotFoundError: No module named 'config'
mydig_ws_1         | killing backend process (if exists)
mydig_ws_1         | starting backend
mydig_ws_1         | done
mydig_ws_1         | killing frontend process (if exists)
mydig_ws_1         | starting frontend
mydig_ws_1         | done
logstash_1         | 2019/01/17 04:02:17 error: open /usr/share/logstash/config/logstash.yml: permission denied
nginx_1            | nginx: [alert] could not open error log file: open() "/var/log/nginx/error.log" failed (13: Permission denied)
nginx_1            | 2019/01/17 04:02:18 [emerg] 8#8: open() "/etc/nginx/conf.d/default.conf" failed (13: Permission denied) in /etc/nginx/nginx.conf:31
dig_logstash_1 exited with code 1
dig_nginx_1 exited with code 1
mydig_ws_1         | Traceback (most recent call last):
mydig_ws_1         |   File "service.py", line 8, in <module>
mydig_ws_1         |     from config import config
mydig_ws_1         | ModuleNotFoundError: No module named 'config'
kafka_1            | [2019-01-17 04:02:19,884] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
mydig_ws_1         | Traceback (most recent call last):
mydig_ws_1         |   File "ws.py", line 1, in <module>
mydig_ws_1         |     from app_base import *
mydig_ws_1         |   File "/app/mydig-webservice/ws/app_base.py", line 32, in <module>
mydig_ws_1         |     from basic_auth import requires_auth, requires_auth_html
mydig_ws_1         |   File "/app/mydig-webservice/ws/basic_auth.py", line 5, in <module>
mydig_ws_1         |     from config import config
mydig_ws_1         | ModuleNotFoundError: No module named 'config'
mydig_ws_1         | serve: Running on port 9881
kafka_1            | [2019-01-17 04:02:21,192] INFO starting (kafka.server.KafkaServer)
kafka_1            | [2019-01-17 04:02:21,193] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer)
kafka_1            | [2019-01-17 04:02:21,225] INFO [ZooKeeperClient] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient)
kafka_1            | [2019-01-17 04:02:21,231] INFO Client environment:zookeeper.version=3.4.13-2d71af4dbe22557fda74f9a9b4309b15a7487f03, built on 06/29/2018 00:39 GMT (org.apache.zookeeper.ZooKeeper)
kafka_1            | [2019-01-17 04:02:21,232] INFO Client environment:host.name=df0b53224909 (org.apache.zookeeper.ZooKeeper)
kafka_1            | [2019-01-17 04:02:21,232] INFO Client environment:java.version=1.8.0_181 (org.apache.zookeeper.ZooKeeper)
kafka_1            | [2019-01-17 04:02:21,232] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper)
kafka_1            | [2019-01-17 04:02:21,232] INFO Client environment:java.home=/usr/lib/jvm/java-1.8-openjdk/jre (org.apache.zookeeper.ZooKeeper)
kafka_1            | [2019-01-17 04:02:21,232] INFO Client environment:java.class.path=/opt/kafka/bin/../libs/activation-1.1.1.jar:/opt/kafka/bin/../libs/aopalliance-repackaged-2.5.0-b42.jar:/opt/kafka/bin/../libs/argparse4j-0.7.0.jar:/opt/kafka/bin/../libs/audience-annotations-0.5.0.jar:/opt/kafka/bin/../libs/commons-lang3-3.5.jar:/opt/kafka/bin/../libs/compileScala.mapping:/opt/kafka/bin/../libs/compileScala.mapping.asc:/opt/kafka/bin/../libs/connect-api-2.1.0.jar:/opt/kafka/bin/../libs/connect-basic-auth-extension-2.1.0.jar:/opt/kafka/bin/../libs/connect-file-2.1.0.jar:/opt/kafka/bin/../libs/connect-json-2.1.0.jar:/opt/kafka/bin/../libs/connect-runtime-2.1.0.jar:/opt/kafka/bin/../libs/connect-transforms-2.1.0.jar:/opt/kafka/bin/../libs/guava-20.0.jar:/opt/kafka/bin/../libs/hk2-api-2.5.0-b42.jar:/opt/kafka/bin/../libs/hk2-locator-2.5.0-b42.jar:/opt/kafka/bin/../libs/hk2-utils-2.5.0-b42.jar:/opt/kafka/bin/../libs/jackson-annotations-2.9.7.jar:/opt/kafka/bin/../libs/jackson-core-2.9.7.jar:/opt/kafka/bin/../libs/jackson-databind-2.9.7.jar:/opt/kafka/bin/../libs/jackson-jaxrs-base-2.9.7.jar:/opt/kafka/bin/../libs/jackson-jaxrs-json-provider-2.9.7.jar:/opt/kafka/bin/../libs/jackson-module-jaxb-annotations-2.9.7.jar:/opt/kafka/bin/../libs/javassist-3.22.0-CR2.jar:/opt/kafka/bin/../libs/javax.annotation-api-1.2.jar:/opt/kafka/bin/../libs/javax.inject-1.jar:/opt/kafka/bin/../libs/javax.inject-2.5.0-b42.jar:/opt/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/opt/kafka/bin/../libs/javax.ws.rs-api-2.1.1.jar:/opt/kafka/bin/../libs/javax.ws.rs-api-2.1.jar:/opt/kafka/bin/../libs/jaxb-api-2.3.0.jar:/opt/kafka/bin/../libs/jersey-client-2.27.jar:/opt/kafka/bin/../libs/jersey-common-2.27.jar:/opt/kafka/bin/../libs/jersey-container-servlet-2.27.jar:/opt/kafka/bin/../libs/jersey-container-servlet-core-2.27.jar:/opt/kafka/bin/../libs/jersey-hk2-2.27.jar:/opt/kafka/bin/../libs/jersey-media-jaxb-2.27.jar:/opt/kafka/bin/../libs/jersey-server-2.27.jar:/opt/kafka/bin/../libs/jetty-client-9.4.12.v20180830.jar:/opt/kafka/bin/../libs/jetty-continuation-9.4.12.v20180830.jar:/opt/kafka/bin/../libs/jetty-http-9.4.12.v20180830.jar:/opt/kafka/bin/../libs/jetty-io-9.4.12.v20180830.jar:/opt/kafka/bin/../libs/jetty-security-9.4.12.v20180830.jar:/opt/kafka/bin/../libs/jetty-server-9.4.12.v20180830.jar:/opt/kafka/bin/../libs/jetty-servlet-9.4.12.v20180830.jar:/opt/kafka/bin/../libs/jetty-servlets-9.4.12.v20180830.jar:/opt/kafka/bin/../libs/jetty-util-9.4.12.v20180830.jar:/opt/kafka/bin/../libs/jopt-simple-5.0.4.jar:/opt/kafka/bin/../libs/kafka-clients-2.1.0.jar:/opt/kafka/bin/../libs/kafka-log4j-appender-2.1.0.jar:/opt/kafka/bin/../libs/kafka-streams-2.1.0.jar:/opt/kafka/bin/../libs/kafka-streams-examples-2.1.0.jar:/opt/kafka/bin/../libs/kafka-streams-scala_2.12-2.1.0.jar:/opt/kafka/bin/../libs/kafka-streams-test-utils-2.1.0.jar:/opt/kafka/bin/../libs/kafka-tools-2.1.0.jar:/opt/kafka/bin/../libs/kafka_2.12-2.1.0-sources.jar:/opt/kafka/bin/../libs/kafka_2.12-2.1.0.jar:/opt/kafka/bin/../libs/log4j-1.2.17.jar:/opt/kafka/bin/../libs/lz4-java-1.5.0.jar:/opt/kafka/bin/../libs/maven-artifact-3.5.4.jar:/opt/kafka/bin/../libs/metrics-core-2.2.0.jar:/opt/kafka/bin/../libs/osgi-resource-locator-1.0.1.jar:/opt/kafka/bin/../libs/plexus-utils-3.1.0.jar:/opt/kafka/bin/../libs/reflections-0.9.11.jar:/opt/kafka/bin/../libs/rocksdbjni-5.14.2.jar:/opt/kafka/bin/../libs/scala-library-2.12.7.jar:/opt/kafka/bin/../libs/scala-logging_2.12-3.9.0.jar:/opt/kafka/bin/../libs/scala-reflect-2.12.7.jar:/opt/kafka/bin/../libs/slf4j-api-1.7.25.jar:/opt/kafka
/bin/../libs/slf4j-log4j12-1.7.25.jar:/opt/kafka/bin/../libs/snappy-java-1.1.7.2.jar:/opt/kafka/bin/../libs/validation-api-1.1.0.Final.jar:/opt/kafka/bin/../libs/zkclient-0.10.jar:/opt/kafka/bin/../libs/zookeeper-3.4.13.jar:/opt/kafka/bin/../libs/zstd-jni-1.3.5-4.jar (org.apache.zookeeper.ZooKeeper)
kafka_1            | [2019-01-17 04:02:21,232] INFO Client environment:java.library.path=/usr/lib/jvm/java-1.8-openjdk/jre/lib/amd64/server:/usr/lib/jvm/java-1.8-openjdk/jre/lib/amd64:/usr/lib/jvm/java-1.8-openjdk/jre/../lib/amd64:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
kafka_1            | [2019-01-17 04:02:21,232] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
kafka_1            | [2019-01-17 04:02:21,232] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
kafka_1            | [2019-01-17 04:02:21,232] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
kafka_1            | [2019-01-17 04:02:21,232] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
kafka_1            | [2019-01-17 04:02:21,232] INFO Client environment:os.version=3.10.0-862.el7.x86_64 (org.apache.zookeeper.ZooKeeper)
kafka_1            | [2019-01-17 04:02:21,232] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper)
kafka_1            | [2019-01-17 04:02:21,232] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper)
kafka_1            | [2019-01-17 04:02:21,232] INFO Client environment:user.dir=/ (org.apache.zookeeper.ZooKeeper)
kafka_1            | [2019-01-17 04:02:21,233] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=6000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@cd3fee8 (org.apache.zookeeper.ZooKeeper)
kafka_1            | [2019-01-17 04:02:21,253] INFO Opening socket connection to server zookeeper/172.19.0.3:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
kafka_1            | [2019-01-17 04:02:21,258] INFO Socket connection established to zookeeper/172.19.0.3:2181, initiating session (org.apache.zookeeper.ClientCnxn)
zookeeper_1        | 2019-01-17 04:02:21,260 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /172.19.0.200:46242
kafka_1            | [2019-01-17 04:02:21,265] INFO [ZooKeeperClient] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
zookeeper_1        | 2019-01-17 04:02:21,275 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@928] - Client attempting to establish new session at /172.19.0.200:46242
zookeeper_1        | 2019-01-17 04:02:21,276 [myid:] - INFO  [SyncThread:0:FileTxnLog@203] - Creating new log file: log.2a
zookeeper_1        | 2019-01-17 04:02:21,292 [myid:] - INFO  [SyncThread:0:ZooKeeperServer@673] - Established session 0x16859f93f640000 with negotiated timeout 6000 for client /172.19.0.200:46242
kafka_1            | [2019-01-17 04:02:21,294] INFO Session establishment complete on server zookeeper/172.19.0.3:2181, sessionid = 0x16859f93f640000, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
kafka_1            | [2019-01-17 04:02:21,299] INFO [ZooKeeperClient] Connected. (kafka.zookeeper.ZooKeeperClient)
zookeeper_1        | 2019-01-17 04:02:21,347 [myid:] - INFO  [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x16859f93f640000 type:create cxid:0x1 zxid:0x2b txntype:-1 reqpath:n/a Error Path:/consumers Error:KeeperErrorCode = NodeExists for /consumers
zookeeper_1        | 2019-01-17 04:02:21,362 [myid:] - INFO  [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x16859f93f640000 type:create cxid:0x2 zxid:0x2c txntype:-1 reqpath:n/a Error Path:/brokers/ids Error:KeeperErrorCode = NodeExists for /brokers/ids
zookeeper_1        | 2019-01-17 04:02:21,364 [myid:] - INFO  [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x16859f93f640000 type:create cxid:0x3 zxid:0x2d txntype:-1 reqpath:n/a Error Path:/brokers/topics Error:KeeperErrorCode = NodeExists for /brokers/topics
zookeeper_1        | 2019-01-17 04:02:21,366 [myid:] - INFO  [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x16859f93f640000 type:create cxid:0x4 zxid:0x2e txntype:-1 reqpath:n/a Error Path:/config/changes Error:KeeperErrorCode = NodeExists for /config/changes
zookeeper_1        | 2019-01-17 04:02:21,368 [myid:] - INFO  [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x16859f93f640000 type:create cxid:0x5 zxid:0x2f txntype:-1 reqpath:n/a Error Path:/admin/delete_topics Error:KeeperErrorCode = NodeExists for /admin/delete_topics
zookeeper_1        | 2019-01-17 04:02:21,370 [myid:] - INFO  [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x16859f93f640000 type:create cxid:0x6 zxid:0x30 txntype:-1 reqpath:n/a Error Path:/brokers/seqid Error:KeeperErrorCode = NodeExists for /brokers/seqid
zookeeper_1        | 2019-01-17 04:02:21,372 [myid:] - INFO  [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x16859f93f640000 type:create cxid:0x7 zxid:0x31 txntype:-1 reqpath:n/a Error Path:/isr_change_notification Error:KeeperErrorCode = NodeExists for /isr_change_notification
zookeeper_1        | 2019-01-17 04:02:21,375 [myid:] - INFO  [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x16859f93f640000 type:create cxid:0x8 zxid:0x32 txntype:-1 reqpath:n/a Error Path:/latest_producer_id_block Error:KeeperErrorCode = NodeExists for /latest_producer_id_block
zookeeper_1        | 2019-01-17 04:02:21,377 [myid:] - INFO  [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x16859f93f640000 type:create cxid:0x9 zxid:0x33 txntype:-1 reqpath:n/a Error Path:/log_dir_event_notification Error:KeeperErrorCode = NodeExists for /log_dir_event_notification
zookeeper_1        | 2019-01-17 04:02:21,379 [myid:] - INFO  [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x16859f93f640000 type:create cxid:0xa zxid:0x34 txntype:-1 reqpath:n/a Error Path:/config/topics Error:KeeperErrorCode = NodeExists for /config/topics
zookeeper_1        | 2019-01-17 04:02:21,381 [myid:] - INFO  [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x16859f93f640000 type:create cxid:0xb zxid:0x35 txntype:-1 reqpath:n/a Error Path:/config/clients Error:KeeperErrorCode = NodeExists for /config/clients
zookeeper_1        | 2019-01-17 04:02:21,383 [myid:] - INFO  [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x16859f93f640000 type:create cxid:0xc zxid:0x36 txntype:-1 reqpath:n/a Error Path:/config/users Error:KeeperErrorCode = NodeExists for /config/users
zookeeper_1        | 2019-01-17 04:02:21,386 [myid:] - INFO  [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x16859f93f640000 type:create cxid:0xd zxid:0x37 txntype:-1 reqpath:n/a Error Path:/config/brokers Error:KeeperErrorCode = NodeExists for /config/brokers
kafka_1            | [2019-01-17 04:02:21,525] INFO Cluster ID = HjKkTFQRT8W3raM-lSrsjA (kafka.server.KafkaServer)
kafka_1            | [2019-01-17 04:02:21,529] WARN No meta.properties file under dir /kafka/kafka-logs-df0b53224909/meta.properties (kafka.server.BrokerMetadataCheckpoint)
kafka_1            | [2019-01-17 04:02:21,641] INFO KafkaConfig values: 
kafka_1            |    advertised.host.name = kafka
kafka_1            |    advertised.listeners = null
kafka_1            |    advertised.port = 9092
kafka_1            |    alter.config.policy.class.name = null
kafka_1            |    alter.log.dirs.replication.quota.window.num = 11
kafka_1            |    alter.log.dirs.replication.quota.window.size.seconds = 1
kafka_1            |    authorizer.class.name = 
kafka_1            |    auto.create.topics.enable = true
kafka_1            |    auto.leader.rebalance.enable = true
kafka_1            |    background.threads = 10
kafka_1            |    broker.id = -1
kafka_1            |    broker.id.generation.enable = true
kafka_1            |    broker.rack = null
kafka_1            |    client.quota.callback.class = null
kafka_1            |    compression.type = producer
kafka_1            |    connection.failed.authentication.delay.ms = 100
kafka_1            |    connections.max.idle.ms = 600000
kafka_1            |    controlled.shutdown.enable = true
kafka_1            |    controlled.shutdown.max.retries = 3
kafka_1            |    controlled.shutdown.retry.backoff.ms = 5000
kafka_1            |    controller.socket.timeout.ms = 30000
kafka_1            |    create.topic.policy.class.name = null
kafka_1            |    default.replication.factor = 1
kafka_1            |    delegation.token.expiry.check.interval.ms = 3600000
kafka_1            |    delegation.token.expiry.time.ms = 86400000
kafka_1            |    delegation.token.master.key = null
kafka_1            |    delegation.token.max.lifetime.ms = 604800000
kafka_1            |    delete.records.purgatory.purge.interval.requests = 1
kafka_1            |    delete.topic.enable = true
kafka_1            |    fetch.purgatory.purge.interval.requests = 1000
kafka_1            |    group.initial.rebalance.delay.ms = 0
kafka_1            |    group.max.session.timeout.ms = 300000
kafka_1            |    group.min.session.timeout.ms = 6000
kafka_1            |    host.name = 
kafka_1            |    inter.broker.listener.name = null
kafka_1            |    inter.broker.protocol.version = 2.1-IV2
kafka_1            |    kafka.metrics.polling.interval.secs = 10
kafka_1            |    kafka.metrics.reporters = []
kafka_1            |    leader.imbalance.check.interval.seconds = 300
kafka_1            |    leader.imbalance.per.broker.percentage = 10
kafka_1            |    listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
kafka_1            |    listeners = null
kafka_1            |    log.cleaner.backoff.ms = 3600
kafka_1            |    log.cleaner.dedupe.buffer.size = 134217728
kafka_1            |    log.cleaner.delete.retention.ms = 86400
kafka_1            |    log.cleaner.enable = true
kafka_1            |    log.cleaner.io.buffer.load.factor = 0.9
kafka_1            |    log.cleaner.io.buffer.size = 524288
kafka_1            |    log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
kafka_1            |    log.cleaner.min.cleanable.ratio = 0.5
kafka_1            |    log.cleaner.min.compaction.lag.ms = 0
kafka_1            |    log.cleaner.threads = 1
kafka_1            |    log.cleanup.policy = [delete]
kafka_1            |    log.dir = /tmp/kafka-logs
kafka_1            |    log.dirs = /kafka/kafka-logs-df0b53224909
kafka_1            |    log.flush.interval.messages = 9223372036854775807
kafka_1            |    log.flush.interval.ms = null
kafka_1            |    log.flush.offset.checkpoint.interval.ms = 60000
kafka_1            |    log.flush.scheduler.interval.ms = 9223372036854775807
kafka_1            |    log.flush.start.offset.checkpoint.interval.ms = 60000
kafka_1            |    log.index.interval.bytes = 4096
kafka_1            |    log.index.size.max.bytes = 10485760
kafka_1            |    log.message.downconversion.enable = true
kafka_1            |    log.message.format.version = 2.1-IV2
kafka_1            |    log.message.timestamp.difference.max.ms = 9223372036854775807
kafka_1            |    log.message.timestamp.type = CreateTime
kafka_1            |    log.preallocate = false
kafka_1            |    log.retention.bytes = -1
kafka_1            |    log.retention.check.interval.ms = 21600
kafka_1            |    log.retention.hours = 168
kafka_1            |    log.retention.minutes = null
kafka_1            |    log.retention.ms = null
kafka_1            |    log.roll.hours = 168
kafka_1            |    log.roll.jitter.hours = 0
kafka_1            |    log.roll.jitter.ms = null
kafka_1            |    log.roll.ms = null
kafka_1            |    log.segment.bytes = 1073741824
kafka_1            |    log.segment.delete.delay.ms = 60000
kafka_1            |    max.connections.per.ip = 2147483647
kafka_1            |    max.connections.per.ip.overrides = 
kafka_1            |    max.incremental.fetch.session.cache.slots = 1000
kafka_1            |    message.max.bytes = 10485760
kafka_1            |    metric.reporters = []
kafka_1            |    metrics.num.samples = 2
kafka_1            |    metrics.recording.level = INFO
kafka_1            |    metrics.sample.window.ms = 30000
kafka_1            |    min.insync.replicas = 1
kafka_1            |    num.io.threads = 8
kafka_1            |    num.network.threads = 3
kafka_1            |    num.partitions = 4
kafka_1            |    num.recovery.threads.per.data.dir = 1
kafka_1            |    num.replica.alter.log.dirs.threads = null
kafka_1            |    num.replica.fetchers = 1
kafka_1            |    offset.metadata.max.bytes = 4096
kafka_1            |    offsets.commit.required.acks = -1
kafka_1            |    offsets.commit.timeout.ms = 5000
kafka_1            |    offsets.load.buffer.size = 5242880
kafka_1            |    offsets.retention.check.interval.ms = 600000
kafka_1            |    offsets.retention.minutes = 10080
kafka_1            |    offsets.topic.compression.codec = 0
kafka_1            |    offsets.topic.num.partitions = 50
kafka_1            |    offsets.topic.replication.factor = 1
kafka_1            |    offsets.topic.segment.bytes = 104857600
kafka_1            |    password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
kafka_1            |    password.encoder.iterations = 4096
kafka_1            |    password.encoder.key.length = 128
kafka_1            |    password.encoder.keyfactory.algorithm = null
kafka_1            |    password.encoder.old.secret = null
kafka_1            |    password.encoder.secret = null
kafka_1            |    port = 9092
kafka_1            |    principal.builder.class = null
kafka_1            |    producer.purgatory.purge.interval.requests = 1000
kafka_1            |    queued.max.request.bytes = -1
kafka_1            |    queued.max.requests = 500
kafka_1            |    quota.consumer.default = 9223372036854775807
kafka_1            |    quota.producer.default = 9223372036854775807
kafka_1            |    quota.window.num = 11
kafka_1            |    quota.window.size.seconds = 1
kafka_1            |    replica.fetch.backoff.ms = 1000
kafka_1            |    replica.fetch.max.bytes = 10485760
kafka_1            |    replica.fetch.min.bytes = 1
kafka_1            |    replica.fetch.response.max.bytes = 10485760
kafka_1            |    replica.fetch.wait.max.ms = 500
kafka_1            |    replica.high.watermark.checkpoint.interval.ms = 5000
kafka_1            |    replica.lag.time.max.ms = 10000
kafka_1            |    replica.socket.receive.buffer.bytes = 65536
kafka_1            |    replica.socket.timeout.ms = 30000
kafka_1            |    replication.quota.window.num = 11
kafka_1            |    replication.quota.window.size.seconds = 1
kafka_1            |    request.timeout.ms = 30000
kafka_1            |    reserved.broker.max.id = 1000
kafka_1            |    sasl.client.callback.handler.class = null
kafka_1            |    sasl.enabled.mechanisms = [GSSAPI]
kafka_1            |    sasl.jaas.config = null
kafka_1            |    sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka_1            |    sasl.kerberos.min.time.before.relogin = 60000
kafka_1            |    sasl.kerberos.principal.to.local.rules = [DEFAULT]
kafka_1            |    sasl.kerberos.service.name = null
kafka_1            |    sasl.kerberos.ticket.renew.jitter = 0.05
kafka_1            |    sasl.kerberos.ticket.renew.window.factor = 0.8
kafka_1            |    sasl.login.callback.handler.class = null
kafka_1            |    sasl.login.class = null
kafka_1            |    sasl.login.refresh.buffer.seconds = 300
kafka_1            |    sasl.login.refresh.min.period.seconds = 60
kafka_1            |    sasl.login.refresh.window.factor = 0.8
kafka_1            |    sasl.login.refresh.window.jitter = 0.05
kafka_1            |    sasl.mechanism.inter.broker.protocol = GSSAPI
kafka_1            |    sasl.server.callback.handler.class = null
kafka_1            |    security.inter.broker.protocol = PLAINTEXT
kafka_1            |    socket.receive.buffer.bytes = 102400
kafka_1            |    socket.request.max.bytes = 104857600
kafka_1            |    socket.send.buffer.bytes = 102400
kafka_1            |    ssl.cipher.suites = []
kafka_1            |    ssl.client.auth = none
kafka_1            |    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
kafka_1            |    ssl.endpoint.identification.algorithm = https
kafka_1            |    ssl.key.password = null
kafka_1            |    ssl.keymanager.algorithm = SunX509
kafka_1            |    ssl.keystore.location = null
kafka_1            |    ssl.keystore.password = null
kafka_1            |    ssl.keystore.type = JKS
kafka_1            |    ssl.protocol = TLS
kafka_1            |    ssl.provider = null
kafka_1            |    ssl.secure.random.implementation = null
kafka_1            |    ssl.trustmanager.algorithm = PKIX
kafka_1            |    ssl.truststore.location = null
kafka_1            |    ssl.truststore.password = null
kafka_1            |    ssl.truststore.type = JKS
kafka_1            |    transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
kafka_1            |    transaction.max.timeout.ms = 900000
kafka_1            |    transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
kafka_1            |    transaction.state.log.load.buffer.size = 5242880
kafka_1            |    transaction.state.log.min.isr = 1
kafka_1            |    transaction.state.log.num.partitions = 50
kafka_1            |    transaction.state.log.replication.factor = 1
kafka_1            |    transaction.state.log.segment.bytes = 104857600
kafka_1            |    transactional.id.expiration.ms = 604800000
kafka_1            |    unclean.leader.election.enable = false
kafka_1            |    zookeeper.connect = zookeeper:2181
kafka_1            |    zookeeper.connection.timeout.ms = 6000
kafka_1            |    zookeeper.max.in.flight.requests = 10
kafka_1            |    zookeeper.session.timeout.ms = 6000
kafka_1            |    zookeeper.set.acl = false
kafka_1            |    zookeeper.sync.time.ms = 2000
kafka_1            |  (kafka.server.KafkaConfig)
kafka_1            | [2019-01-17 04:02:21,696] INFO KafkaConfig values: 
kafka_1            |    advertised.host.name = kafka
kafka_1            |    advertised.listeners = null
kafka_1            |    advertised.port = 9092
kafka_1            |    alter.config.policy.class.name = null
kafka_1            |    alter.log.dirs.replication.quota.window.num = 11
kafka_1            |    alter.log.dirs.replication.quota.window.size.seconds = 1
kafka_1            |    authorizer.class.name = 
kafka_1            |    auto.create.topics.enable = true
kafka_1            |    auto.leader.rebalance.enable = true
kafka_1            |    background.threads = 10
kafka_1            |    broker.id = -1
kafka_1            |    broker.id.generation.enable = true
kafka_1            |    broker.rack = null
kafka_1            |    client.quota.callback.class = null
kafka_1            |    compression.type = producer
kafka_1            |    connection.failed.authentication.delay.ms = 100
kafka_1            |    connections.max.idle.ms = 600000
kafka_1            |    controlled.shutdown.enable = true
kafka_1            |    controlled.shutdown.max.retries = 3
kafka_1            |    controlled.shutdown.retry.backoff.ms = 5000
kafka_1            |    controller.socket.timeout.ms = 30000
kafka_1            |    create.topic.policy.class.name = null
kafka_1            |    default.replication.factor = 1
kafka_1            |    delegation.token.expiry.check.interval.ms = 3600000
kafka_1            |    delegation.token.expiry.time.ms = 86400000
kafka_1            |    delegation.token.master.key = null
kafka_1            |    delegation.token.max.lifetime.ms = 604800000
kafka_1            |    delete.records.purgatory.purge.interval.requests = 1
kafka_1            |    delete.topic.enable = true
kafka_1            |    fetch.purgatory.purge.interval.requests = 1000
kafka_1            |    group.initial.rebalance.delay.ms = 0
kafka_1            |    group.max.session.timeout.ms = 300000
kafka_1            |    group.min.session.timeout.ms = 6000
kafka_1            |    host.name = 
kafka_1            |    inter.broker.listener.name = null
kafka_1            |    inter.broker.protocol.version = 2.1-IV2
kafka_1            |    kafka.metrics.polling.interval.secs = 10
kafka_1            |    kafka.metrics.reporters = []
kafka_1            |    leader.imbalance.check.interval.seconds = 300
kafka_1            |    leader.imbalance.per.broker.percentage = 10
kafka_1            |    listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
kafka_1            |    listeners = null
kafka_1            |    log.cleaner.backoff.ms = 3600
kafka_1            |    log.cleaner.dedupe.buffer.size = 134217728
kafka_1            |    log.cleaner.delete.retention.ms = 86400
kafka_1            |    log.cleaner.enable = true
kafka_1            |    log.cleaner.io.buffer.load.factor = 0.9
kafka_1            |    log.cleaner.io.buffer.size = 524288
kafka_1            |    log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
kafka_1            |    log.cleaner.min.cleanable.ratio = 0.5
kafka_1            |    log.cleaner.min.compaction.lag.ms = 0
kafka_1            |    log.cleaner.threads = 1
kafka_1            |    log.cleanup.policy = [delete]
kafka_1            |    log.dir = /tmp/kafka-logs
kafka_1            |    log.dirs = /kafka/kafka-logs-df0b53224909
kafka_1            |    log.flush.interval.messages = 9223372036854775807
kafka_1            |    log.flush.interval.ms = null
kafka_1            |    log.flush.offset.checkpoint.interval.ms = 60000
kafka_1            |    log.flush.scheduler.interval.ms = 9223372036854775807
kafka_1            |    log.flush.start.offset.checkpoint.interval.ms = 60000
kafka_1            |    log.index.interval.bytes = 4096
kafka_1            |    log.index.size.max.bytes = 10485760
kafka_1            |    log.message.downconversion.enable = true
kafka_1            |    log.message.format.version = 2.1-IV2
kafka_1            |    log.message.timestamp.difference.max.ms = 9223372036854775807
kafka_1            |    log.message.timestamp.type = CreateTime
kafka_1            |    log.preallocate = false
kafka_1            |    log.retention.bytes = -1
kafka_1            |    log.retention.check.interval.ms = 21600
kafka_1            |    log.retention.hours = 168
kafka_1            |    log.retention.minutes = null
kafka_1            |    log.retention.ms = null
kafka_1            |    log.roll.hours = 168
kafka_1            |    log.roll.jitter.hours = 0
kafka_1            |    log.roll.jitter.ms = null
kafka_1            |    log.roll.ms = null
kafka_1            |    log.segment.bytes = 1073741824
kafka_1            |    log.segment.delete.delay.ms = 60000
kafka_1            |    max.connections.per.ip = 2147483647
kafka_1            |    max.connections.per.ip.overrides = 
kafka_1            |    max.incremental.fetch.session.cache.slots = 1000
kafka_1            |    message.max.bytes = 10485760
kafka_1            |    metric.reporters = []
kafka_1            |    metrics.num.samples = 2
kafka_1            |    metrics.recording.level = INFO
kafka_1            |    metrics.sample.window.ms = 30000
kafka_1            |    min.insync.replicas = 1
kafka_1            |    num.io.threads = 8
kafka_1            |    num.network.threads = 3
kafka_1            |    num.partitions = 4
kafka_1            |    num.recovery.threads.per.data.dir = 1
kafka_1            |    num.replica.alter.log.dirs.threads = null
kafka_1            |    num.replica.fetchers = 1
kafka_1            |    offset.metadata.max.bytes = 4096
kafka_1            |    offsets.commit.required.acks = -1
kafka_1            |    offsets.commit.timeout.ms = 5000
kafka_1            |    offsets.load.buffer.size = 5242880
kafka_1            |    offsets.retention.check.interval.ms = 600000
kafka_1            |    offsets.retention.minutes = 10080
kafka_1            |    offsets.topic.compression.codec = 0
kafka_1            |    offsets.topic.num.partitions = 50
kafka_1            |    offsets.topic.replication.factor = 1
kafka_1            |    offsets.topic.segment.bytes = 104857600
kafka_1            |    password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
kafka_1            |    password.encoder.iterations = 4096
kafka_1            |    password.encoder.key.length = 128
kafka_1            |    password.encoder.keyfactory.algorithm = null
kafka_1            |    password.encoder.old.secret = null
kafka_1            |    password.encoder.secret = null
kafka_1            |    port = 9092
kafka_1            |    principal.builder.class = null
kafka_1            |    producer.purgatory.purge.interval.requests = 1000
kafka_1            |    queued.max.request.bytes = -1
kafka_1            |    queued.max.requests = 500
kafka_1            |    quota.consumer.default = 9223372036854775807
kafka_1            |    quota.producer.default = 9223372036854775807
kafka_1            |    quota.window.num = 11
kafka_1            |    quota.window.size.seconds = 1
kafka_1            |    replica.fetch.backoff.ms = 1000
kafka_1            |    replica.fetch.max.bytes = 10485760
kafka_1            |    replica.fetch.min.bytes = 1
kafka_1            |    replica.fetch.response.max.bytes = 10485760
kafka_1            |    replica.fetch.wait.max.ms = 500
kafka_1            |    replica.high.watermark.checkpoint.interval.ms = 5000
kafka_1            |    replica.lag.time.max.ms = 10000
kafka_1            |    replica.socket.receive.buffer.bytes = 65536
kafka_1            |    replica.socket.timeout.ms = 30000
kafka_1            |    replication.quota.window.num = 11
kafka_1            |    replication.quota.window.size.seconds = 1
kafka_1            |    request.timeout.ms = 30000
kafka_1            |    reserved.broker.max.id = 1000
kafka_1            |    sasl.client.callback.handler.class = null
kafka_1            |    sasl.enabled.mechanisms = [GSSAPI]
kafka_1            |    sasl.jaas.config = null
kafka_1            |    sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka_1            |    sasl.kerberos.min.time.before.relogin = 60000
kafka_1            |    sasl.kerberos.principal.to.local.rules = [DEFAULT]
kafka_1            |    sasl.kerberos.service.name = null
kafka_1            |    sasl.kerberos.ticket.renew.jitter = 0.05
kafka_1            |    sasl.kerberos.ticket.renew.window.factor = 0.8
kafka_1            |    sasl.login.callback.handler.class = null
kafka_1            |    sasl.login.class = null
kafka_1            |    sasl.login.refresh.buffer.seconds = 300
kafka_1            |    sasl.login.refresh.min.period.seconds = 60
kafka_1            |    sasl.login.refresh.window.factor = 0.8
kafka_1            |    sasl.login.refresh.window.jitter = 0.05
kafka_1            |    sasl.mechanism.inter.broker.protocol = GSSAPI
kafka_1            |    sasl.server.callback.handler.class = null
kafka_1            |    security.inter.broker.protocol = PLAINTEXT
kafka_1            |    socket.receive.buffer.bytes = 102400
kafka_1            |    socket.request.max.bytes = 104857600
kafka_1            |    socket.send.buffer.bytes = 102400
kafka_1            |    ssl.cipher.suites = []
kafka_1            |    ssl.client.auth = none
kafka_1            |    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
kafka_1            |    ssl.endpoint.identification.algorithm = https
kafka_1            |    ssl.key.password = null
kafka_1            |    ssl.keymanager.algorithm = SunX509
kafka_1            |    ssl.keystore.location = null
kafka_1            |    ssl.keystore.password = null
kafka_1            |    ssl.keystore.type = JKS
kafka_1            |    ssl.protocol = TLS
kafka_1            |    ssl.provider = null
kafka_1            |    ssl.secure.random.implementation = null
kafka_1            |    ssl.trustmanager.algorithm = PKIX
kafka_1            |    ssl.truststore.location = null
kafka_1            |    ssl.truststore.password = null
kafka_1            |    ssl.truststore.type = JKS
kafka_1            |    transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
kafka_1            |    transaction.max.timeout.ms = 900000
kafka_1            |    transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
kafka_1            |    transaction.state.log.load.buffer.size = 5242880
kafka_1            |    transaction.state.log.min.isr = 1
kafka_1            |    transaction.state.log.num.partitions = 50
kafka_1            |    transaction.state.log.replication.factor = 1
kafka_1            |    transaction.state.log.segment.bytes = 104857600
kafka_1            |    transactional.id.expiration.ms = 604800000
kafka_1            |    unclean.leader.election.enable = false
kafka_1            |    zookeeper.connect = zookeeper:2181
kafka_1            |    zookeeper.connection.timeout.ms = 6000
kafka_1            |    zookeeper.max.in.flight.requests = 10
kafka_1            |    zookeeper.session.timeout.ms = 6000
kafka_1            |    zookeeper.set.acl = false
kafka_1            |    zookeeper.sync.time.ms = 2000
kafka_1            |  (kafka.server.KafkaConfig)
kafka_1            | [2019-01-17 04:02:21,813] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka_1            | [2019-01-17 04:02:21,815] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka_1            | [2019-01-17 04:02:21,825] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka_1            | [2019-01-17 04:02:21,852] INFO Log directory /kafka/kafka-logs-df0b53224909 not found, creating it. (kafka.log.LogManager)
kafka_1            | [2019-01-17 04:02:21,863] ERROR Failed to create or validate data directory /kafka/kafka-logs-df0b53224909 (kafka.server.LogDirFailureChannel)
kafka_1            | java.io.IOException: Failed to create data directory /kafka/kafka-logs-df0b53224909
kafka_1            |    at kafka.log.LogManager.$anonfun$createAndValidateLogDirs$1(LogManager.scala:158)
kafka_1            |    at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:58)
kafka_1            |    at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:51)
kafka_1            |    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
kafka_1            |    at kafka.log.LogManager.createAndValidateLogDirs(LogManager.scala:149)
kafka_1            |    at kafka.log.LogManager.<init>(LogManager.scala:80)
kafka_1            |    at kafka.log.LogManager$.apply(LogManager.scala:1005)
kafka_1            |    at kafka.server.KafkaServer.startup(KafkaServer.scala:237)
kafka_1            |    at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
kafka_1            |    at kafka.Kafka$.main(Kafka.scala:75)
kafka_1            |    at kafka.Kafka.main(Kafka.scala)
kafka_1            | [2019-01-17 04:02:21,867] ERROR Shutdown broker because none of the specified log dirs from /kafka/kafka-logs-df0b53224909 can be created or validated (kafka.log.LogManager)
zookeeper_1        | 2019-01-17 04:02:22,228 [myid:] - WARN  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@357] - caught end of stream exception
zookeeper_1        | EndOfStreamException: Unable to read additional data from client sessionid 0x16859f93f640000, likely client has closed socket
zookeeper_1        |    at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
zookeeper_1        |    at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:203)
zookeeper_1        |    at java.lang.Thread.run(Thread.java:745)
zookeeper_1        | 2019-01-17 04:02:22,229 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1008] - Closed socket connection for client /172.19.0.200:46242 which had sessionid 0x16859f93f640000
dig_kafka_1 exited with code 1
zookeeper_1        | 2019-01-17 04:02:28,007 [myid:] - INFO  [SessionTracker:ZooKeeperServer@358] - Expiring session 0x16859f93f640000, timeout of 6000ms exceeded
zookeeper_1        | 2019-01-17 04:02:28,008 [myid:] - INFO  [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@487] - Processed session termination for sessionid: 0x16859f93f640000

Additional info:

docker-compose version 1.23.2, build 1110ad01
[mbach@centos-workstation dig-etl-engine]$ docker --version
Docker version 1.13.1, build 07f3374/1.13.1
[mbach@centos-workstation dig-etl-engine]$ uname -a
Linux centos-workstation 3.10.0-862.el7.x86_64 #1 SMP Fri Apr 20 16:44:24 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
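
To confirm which services actually died, I ran diagnostics along these lines from the repo root (illustrative commands only; the SELinux check is an assumption based on this being a stock CentOS 7 host):

docker-compose -f docker-compose.yml ps   # lists each service with its state / exit code
ls -ldZ .                                 # ownership and SELinux context of the clone directory backing the bind mounts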
GreatYYX commented 5 years ago

Hi @mbach04, please use sudo when firing up engine.sh.
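
For example (a minimal sketch; it assumes the repo was cloned to ~/dig-etl-engine and follows the README's engine.sh usage):

cd ~/dig-etl-engine   # assumed clone location
sudo ./engine.sh up   # bring the stack up with root privileges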

mbach04 commented 5 years ago

I've been starting the engine up both as root and as a privileged user via sudo.

saggu commented 5 years ago

A couple of things to check:

  1. Did you follow the specific instructions for Linux machines?
  2. Is it possible that you cloned the repo in a directory, where root does not have access?

This definitely looks like a permission issue. What does your .env file look like? We have run the setup on CentOS machines and it has never crashed like this.
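
For example, something along these lines would help (illustrative only; the clone path is an assumption):

ls -ld ~/dig-etl-engine            # can root traverse the clone directory?
sudo cat ~/dig-etl-engine/.env     # do the host paths referenced here exist, and are they readable?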

mbach04 commented 5 years ago

I moved to Ubuntu 16.04 and am having no issues launching at this point. Please feel free to close. I'm also happy to keep digging into the CentOS problem if there's any benefit. Have you guys considered prepping this for k8s or OpenShift?