Cyb3rWard0g / HELK

The Hunting ELK
GNU General Public License v3.0

Sending events not working after the latest update #300

Closed sidomir7 closed 5 years ago

sidomir7 commented 5 years ago

Hi, after downloading the new version (downloaded today, 12.7.2019) I was not able to send logs to Kafka or Logstash with Winlogbeat 7.1.1 or 8.0.0 (live logs or offline logs).

I had an older VM with the same version of HELK, but downloaded a couple of weeks ago (2 weeks, I think):

**********************************************
**          HELK - THE HUNTING ELK          **
**                                          **
** Author: Roberto Rodriguez (@Cyb3rWard0g) **
** HELK build version: v0.1.8-alpha05292019 **
** HELK ELK version: 7.1.0                  **
** License: GPL-3.0                         **
**********************************************

And this one works fine. Here are the logs from Winlogbeat 8.0.0 against the new HELK build, where it is not working:

2019-07-12T14:43:39.478+0100    INFO    instance/beat.go:606    Home path: [C:\Users\IEUser\Documents\winlogbeat-8.0.0-SNAPSHOT-windows-x86_64] Config path: [C:\Users\IEUser\Documents\winlogbeat-8.0.0-SNAPSHOT-windows-x86_64] Data path: [C:\Users\IEUser\Documents\winlogbeat-8.0.0-SNAPSHOT-windows-x86_64\data] Logs path: [C:\Users\IEUser\Documents\winlogbeat-8.0.0-SNAPSHOT-windows-x86_64\logs]
2019-07-12T14:43:39.479+0100    INFO    instance/beat.go:614    Beat ID: 8c5d2858-d75b-4a27-9e8f-b62244c0d29f
2019-07-12T14:43:39.479+0100    INFO    [beat]  instance/beat.go:902    Beat info       {"system_info": {"beat": {"path": {"config": "C:\\Users\\IEUser\\Documents\\winlogbeat-8.0.0-SNAPSHOT-windows-x86_64", "data": "C:\\Users\\IEUser\\Documents\\winlogbeat-8.0.0-SNAPSHOT-windows-x86_64\\data", "home": "C:\\Users\\IEUser\\Documents\\winlogbeat-8.0.0-SNAPSHOT-windows-x86_64", "logs": "C:\\Users\\IEUser\\Documents\\winlogbeat-8.0.0-SNAPSHOT-windows-x86_64\\logs"}, "type": "winlogbeat", "uuid": "8c5d2858-d75b-4a27-9e8f-b62244c0d29f"}}}
2019-07-12T14:43:39.479+0100    INFO    [beat]  instance/beat.go:911    Build info      {"system_info": {"build": {"commit": "d21cf680bc8b7e923ea257a2050cacebec81763d", "libbeat": "8.0.0", "time": "2019-07-08T02:33:15.000Z", "version": "8.0.0"}}}
2019-07-12T14:43:39.479+0100    INFO    [beat]  instance/beat.go:914    Go runtime info {"system_info": {"go": {"os":"windows","arch":"amd64","max_procs":2,"version":"go1.12.4"}}}
2019-07-12T14:43:39.491+0100    INFO    [beat]  instance/beat.go:918    Host info       {"system_info": {"host": {"architecture":"x86_64","boot_time":"2019-07-12T12:46:43.96+01:00","name":"MSEDGEWIN10","ip":["fe80::70e5:e80d:47c8:6ac5/64","192.168.1.117/24","172.16.31.20/24","fe80::5ce6:745e:f34d:81d1/64","169.254.129.209/16","::1/128","127.0.0.1/8"],"kernel_version":"10.0.17763.615 (WinBuild.160101.0800)","mac":["00:0c:29:a0:c6:b7","00:0c:29:a0:c6:c1","02:00:4c:4f:4f:50"],"os":{"family":"windows","platform":"windows","name":"Windows 10 Enterprise Evaluation","version":"10.0","major":10,"minor":0,"patch":0,"build":"17763.615"},"timezone":"CET","timezone_offset_sec":3600,"id":"43199d79-b2b3-4f66-a33d-cd0f7969970a"}}}
2019-07-12T14:43:39.497+0100    INFO    [beat]  instance/beat.go:947    Process info    {"system_info": {"process": {"cwd": "C:\\Users\\IEUser\\Documents\\winlogbeat-8.0.0-SNAPSHOT-windows-x86_64", "exe": "C:\\Users\\IEUser\\Documents\\winlogbeat-8.0.0-SNAPSHOT-windows-x86_64\\winlogbeat.exe", "name": "winlogbeat.exe", "pid": 2256, "ppid": 7452, "start_time": "2019-07-12T14:43:39.453+0100"}}}
2019-07-12T14:43:39.498+0100    INFO    instance/beat.go:292    Setup Beat: winlogbeat; Version: 8.0.0
2019-07-12T14:43:39.499+0100    INFO    [publisher]     pipeline/module.go:97   Beat name: MSEDGEWIN10
2019-07-12T14:43:39.500+0100    INFO    beater/winlogbeat.go:69 State will be read from and persisted to C:\Users\IEUser\Documents\winlogbeat-8.0.0-SNAPSHOT-windows-x86_64\data\evtx-registry.yml
2019-07-12T14:43:39.501+0100    INFO    instance/beat.go:421    winlogbeat start running.
2019-07-12T14:43:39.501+0100    INFO    [monitoring]    log/log.go:118  Starting metrics logging every 30s
2019-07-12T14:43:40.416+0100    INFO    beater/eventlogger.go:113       EventLog[C:\Users\IEUser\Documents\logy2\application.evtx] Stop processing.
2019-07-12T14:43:40.417+0100    INFO    beater/winlogbeat.go:157        Shutdown will wait max 1m0s for the remaining 0 events to publish.
2019-07-12T14:43:40.418+0100    INFO    [monitoring]    log/log.go:153  Total non-zero metrics  {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":15,"time":{"ms":15}},"total":{"ticks":61,"time":{"ms":61},"value":61},"user":{"ticks":46,"time":{"ms":46}}},"handles":{"open":190},"info":{"ephemeral_id":"733a5ebe-c41c-4fd4-b01a-c2aa6889d2b7","uptime":{"ms":1057}},"memstats":{"gc_next":4194304,"memory_alloc":2475176,"memory_total":4002416,"rss":20803584},"runtime":{"goroutines":14}},"libbeat":{"config":{"module":{"running":0}},"output":{"type":"kafka"},"pipeline":{"clients":0,"events":{"active":0}}},"system":{"cpu":{"cores":2}}}}}
2019-07-12T14:43:40.419+0100    INFO    [monitoring]    log/log.go:154  Uptime: 1.0593347s
2019-07-12T14:43:40.420+0100    INFO    [monitoring]    log/log.go:131  Stopping metrics logging.
2019-07-12T14:43:40.421+0100    INFO    instance/beat.go:431    winlogbeat stopped.

With the same config but the older HELK it works fine, so the problem is not in Winlogbeat.
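For reference, the Winlogbeat side of this setup would look roughly like the following. This is a minimal hypothetical sketch, not the poster's actual file: the event-log path comes from the Winlogbeat log above, the broker address comes from the Kafka warnings in the Logstash log later in the thread, and the topic name is an assumption.

```yaml
# Hypothetical minimal winlogbeat.yml for replaying an offline .evtx file
# into HELK's Kafka broker. Host IP and topic are assumptions from this thread.
winlogbeat.event_logs:
  - name: C:\Users\IEUser\Documents\logy2\application.evtx

output.kafka:
  hosts: ["192.168.1.116:9092"]
  topic: "winlogbeat"
```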

Cyb3rWard0g commented 5 years ago

Hey @sidomir7, can you provide the Logstash and Kafka logs please? `sudo docker logs helk-kafka-broker` and `sudo docker logs helk-logstash`

Cyb3rWard0g commented 5 years ago

I just tested the latest HELK version, as shown below, and it is working fine.

**********************************************
**          HELK - THE HUNTING ELK          **
**                                          **
** Author: Roberto Rodriguez (@Cyb3rWard0g) **
** HELK build version: v0.1.8-alpha05292019 **
** HELK ELK version: 7.1.0                  **
** License: GPL-3.0                         **
**********************************************

[HELK-INSTALLATION-INFO] HELK being hosted on a Linux box
[HELK-INSTALLATION-INFO] Available Memory: 12451 MBs
[HELK-INSTALLATION-INFO] You're using ubuntu version xenial

*****************************************************
*      HELK - Docker Compose Build Choices          *
*****************************************************

1. KAFKA + KSQL + ELK + NGNIX
2. KAFKA + KSQL + ELK + NGNIX + ELASTALERT
3. KAFKA + KSQL + ELK + NGNIX + SPARK + JUPYTER
4. KAFKA + KSQL + ELK + NGNIX + SPARK + JUPYTER + ELASTALERT

Enter build choice [ 1 - 4]: 4
[HELK-INSTALLATION-INFO] HELK build set to 4
[HELK-INSTALLATION-INFO] Set HELK elastic subscription (basic or trial): basic
[HELK-INSTALLATION-INFO] Set HELK IP. Default value is your current IP: 192.168.64.138
[HELK-INSTALLATION-INFO] Set HELK Kibana UI Password: hunting
[HELK-INSTALLATION-INFO] Verify HELK Kibana UI Password: hunting 
[HELK-INSTALLATION-INFO] Installing htpasswd..
[HELK-INSTALLATION-INFO] Docker already installed
[HELK-INSTALLATION-INFO] Making sure you assigned enough disk space to the current Docker base directory
[HELK-INSTALLATION-INFO] Available Docker Disk: 67 GBs
[HELK-INSTALLATION-INFO] Checking local vm.max_map_count variable and setting it to 4120294
[HELK-INSTALLATION-INFO] Building & running HELK from helk-kibana-notebook-analysis-alert-basic.yml file..
[HELK-INSTALLATION-INFO] Waiting for some services to be up .....

***********************************************************************************
** [HELK-INSTALLATION-INFO] HELK WAS INSTALLED SUCCESSFULLY                      **
** [HELK-INSTALLATION-INFO] USE THE FOLLOWING SETTINGS TO INTERACT WITH THE HELK **
***********************************************************************************

HELK KIBANA URL: https://192.168.64.138
HELK KIBANA USER: helk
HELK KIBANA PASSWORD: hunting
HELK SPARK MASTER UI: http://192.168.64.138:8080
HELK JUPYTER SERVER URL: http://192.168.64.138/jupyter
HELK JUPYTER CURRENT TOKEN: 3e6201ab24a8d1a38daa28113e16190177a48753e85b2d63
HELK ZOOKEEPER: 192.168.64.138:2181
HELK KSQL SERVER: 192.168.64.138:8088

IT IS HUNTING SEASON!!!!!


neu5ron commented 5 years ago

In addition to the logs above that Roberto asked for, one question: is this a fresh install or an “upgrade”?

sidomir7 commented 5 years ago

@Cyb3rWard0g Sorry for the delay, I can provide logs on Monday. @neu5ron It was a fresh install.

neu5ron commented 5 years ago

If you already ran Winlogbeat once, it will keep the offset of the last event read. Since this is a standalone/raw .evtx file, no new events will ever be written to it (as I'm sure you know), so Winlogbeat will not resend the events. Find the cache/offset file, delete it, and then rerun Winlogbeat. The file is usually located within the Winlogbeat directory you ran from, under a folder named `data`, I believe. Let me know if that works.
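The offset-reset step above can be sketched like this. The state-file name matches the path Winlogbeat printed in the log earlier in the thread (`data\evtx-registry.yml`); the local directory name is illustrative, and a stand-in file is created here just to demonstrate the reset.

```shell
# Winlogbeat persists its read position per event-log source in a state file
# (shown in the log above as data\evtx-registry.yml). For a one-off .evtx
# replay, no new events ever arrive, so an existing state file means nothing
# gets resent. Simulated here with a stand-in file:
WINLOGBEAT_DIR="winlogbeat-8.0.0-SNAPSHOT-windows-x86_64"   # illustrative path
mkdir -p "$WINLOGBEAT_DIR/data"
touch "$WINLOGBEAT_DIR/data/evtx-registry.yml"              # stand-in state file

# Deleting the state file resets the offsets; the next run re-reads the
# .evtx files from the beginning:
rm -f "$WINLOGBEAT_DIR/data/evtx-registry.yml"
```

On Windows the equivalent is deleting `data\evtx-registry.yml` inside the Winlogbeat folder and running `winlogbeat.exe` again.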

sidomir7 commented 5 years ago

Hi, sending the logs. Logstash:

[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Setting Elasticsearch URL to http://helk-elasticsearch:9200
[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Waiting for elasticsearch URI to be accessible..
[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading templates to elasticsearch..
[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-all-default.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading indexme.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-winevent-all.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-winevent-winlogbeat-param-fields.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading powershell-direct-template.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading winevent-application-template.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading winevent-powershell-template.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading winevent-security-template.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading winevent-sysmon-template.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading winevent-system-template.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading winevent-wmiactivity-template.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-not-ip.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-meta-enrichment-for-endpoints.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-meta-enrichment-for-powershell.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-fingerprints-for-endpoints.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-fingerprints-powershell.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-not-ip.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-ip-dst.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-ip-dst-nat.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-ip-src.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-ip-src-nat.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-ipv6-dst.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-ipv6-dst-nat.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-ipv6-src.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-ipv6-src-nat.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-any-fields.json template to elasticsearch..
{"acknowledged":true}[HELK-ES-DOCKER-INSTALLATION-INFO] Configuring elasticsearch cluster settings..
{"acknowledged":true,"persistent":{"cluster":{"max_shards_per_node":"3000"},"indices":{"breaker":{"request":{"limit":"70%"}}},"search":{"max_open_scroll_context":"15000"}},"transient":{"cluster":{"max_shards_per_node":"3000"},"indices":{"breaker":{"request":{"limit":"70%"}}},"search":{"max_open_scroll_context":"15000"}}}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Checking Logstash plugins..
logstash-filter-prune
Installing file: /usr/share/logstash/plugins/logstash-offline-plugins-7.0.1.zip
Install successful
[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Logstash plugins installed via offline package..
[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Setting LS_JAVA_OPTS to -Xms1186m -Xmx2373m
[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Running docker-entrypoint script..
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2019-07-12T08:44:19,450][WARN ][logstash.licensechecker.licensereader] Restored connection to ES instance {:url=>"http://helk-elasticsearch:9200/"}
[2019-07-12T08:44:20,506][WARN ][logstash.licensechecker.licensereader] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-12T08:44:39,158][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://helk-elasticsearch:9200/"}
[2019-07-12T08:44:39,166][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-12T08:44:39,202][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://helk-elasticsearch:9200/"}
[2019-07-12T08:44:39,206][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-12T08:44:39,219][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://helk-elasticsearch:9200/"}
[2019-07-12T08:44:39,222][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-12T08:44:39,289][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://helk-elasticsearch:9200/"}
[2019-07-12T08:44:39,293][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-12T08:44:39,324][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://helk-elasticsearch:9200/"}
[2019-07-12T08:44:39,327][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-12T08:44:39,393][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://helk-elasticsearch:9200/"}
[2019-07-12T08:44:39,397][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-12T08:44:39,417][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://helk-elasticsearch:9200/"}
[2019-07-12T08:44:39,421][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-12T08:44:39,441][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://helk-elasticsearch:9200/"}
[2019-07-12T08:44:39,444][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-12T08:44:39,451][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://helk-elasticsearch:9200/"}
[2019-07-12T08:44:39,460][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-12T08:44:39,467][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://helk-elasticsearch:9200/"}
[2019-07-12T08:44:39,483][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-12T08:44:39,503][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://helk-elasticsearch:9200/"}
[2019-07-12T08:44:39,506][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-12T08:44:39,514][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://helk-elasticsearch:9200/"}
[2019-07-12T08:44:39,517][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-12T08:44:39,534][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://helk-elasticsearch:9200/"}
[2019-07-12T08:44:39,537][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-12T08:44:39,547][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://helk-elasticsearch:9200/"}
[2019-07-12T08:44:39,550][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-12T08:44:43,262][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch bulk_path=>"/_monitoring/bulk?system_id=logstash&system_api_version=7&interval=1s", hosts=>[http://helk-elasticsearch:9200], sniffing=>false, manage_template=>false, id=>"c4874b4cfe08abd56717d3ab13f1c13581dffcfe83fa2fbabb4fa0f7d4748207", document_type=>"%{[@metadata][document_type]}", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_3e0c3c21-bbaf-4469-8ecf-aaf3ee6fac02", enable_metric=>true, charset=>"UTF-8">, workers=>1, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, ilm_enabled=>"auto", ilm_rollover_alias=>"logstash", ilm_pattern=>"{now/d}-000001", ilm_policy=>"logstash-policy", action=>"index", ssl_certificate_verification=>true, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[2019-07-12T08:44:43,615][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://helk-elasticsearch:9200/"}
[2019-07-12T08:44:43,623][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-12T09:08:20,686][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"Got response code '429' contacting Elasticsearch at URL 'http://helk-elasticsearch:9200/_xpack'"}
[2019-07-12T09:10:20,660][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"Got response code '429' contacting Elasticsearch at URL 'http://helk-elasticsearch:9200/_xpack'"}
[2019-07-12T09:12:04,386][WARN ][org.apache.kafka.clients.NetworkClient] [Consumer clientId=logstash-0, groupId=helk_logstash] Connection to node 1 (/192.168.1.116:9092) could not be established. Broker may not be available.
[2019-07-12T09:12:05,295][WARN ][org.apache.kafka.clients.NetworkClient] [Consumer clientId=logstash-0, groupId=helk_logstash] Connection to node 1 (/192.168.1.116:9092) could not be established. Broker may not be available.
[2019-07-12T09:12:06,355][WARN ][org.apache.kafka.clients.NetworkClient] [Consumer clientId=logstash-0, groupId=helk_logstash] Connection to node 1 (/192.168.1.116:9092) could not be established. Broker may not be available.
[2019-07-12T09:12:07,566][WARN ][org.apache.kafka.clients.NetworkClient] [Consumer clientId=logstash-0, groupId=helk_logstash] Connection to node 1 (/192.168.1.116:9092) could not be established. Broker may not be available.
[2019-07-12T09:12:08,676][WARN ][org.apache.kafka.clients.NetworkClient] [Consumer clientId=logstash-0, groupId=helk_logstash] Connection to node 1 (/192.168.1.116:9092) could not be established. Broker may not be available.
[2019-07-12T09:12:09,533][WARN ][org.apache.kafka.clients.NetworkClient] [Consumer clientId=logstash-0, groupId=helk_logstash] Connection to node 1 (/192.168.1.116:9092) could not be established. Broker may not be available.
[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Setting Elasticsearch URL to http://helk-elasticsearch:9200
[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Waiting for elasticsearch URI to be accessible..
[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading templates to elasticsearch..
[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-all-default.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading indexme.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-winevent-all.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-winevent-winlogbeat-param-fields.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading powershell-direct-template.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading winevent-application-template.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading winevent-powershell-template.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading winevent-security-template.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading winevent-sysmon-template.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading winevent-system-template.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading winevent-wmiactivity-template.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-not-ip.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-meta-enrichment-for-endpoints.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-meta-enrichment-for-powershell.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-fingerprints-for-endpoints.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-fingerprints-powershell.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-not-ip.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-ip-dst.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-ip-dst-nat.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-ip-src.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-ip-src-nat.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-ipv6-dst.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-ipv6-dst-nat.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-ipv6-src.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-ipv6-src-nat.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-any-fields.json template to elasticsearch..
{"acknowledged":true}[HELK-ES-DOCKER-INSTALLATION-INFO] Configuring elasticsearch cluster settings..
{"acknowledged":true,"persistent":{"cluster":{"max_shards_per_node":"3000"},"indices":{"breaker":{"request":{"limit":"70%"}}},"search":{"max_open_scroll_context":"15000"}},"transient":{"cluster":{"max_shards_per_node":"3000"},"indices":{"breaker":{"request":{"limit":"70%"}}},"search":{"max_open_scroll_context":"15000"}}}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Checking Logstash plugins..
logstash-filter-prune
logstash-filter-i18n
logstash-input-wmi
[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Plugins are already installed
[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Setting LS_JAVA_OPTS to -Xms1303m -Xmx2607m
[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Running docker-entrypoint script..
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2019-07-12T09:18:35,749][WARN ][logstash.licensechecker.licensereader] Restored connection to ES instance {:url=>"http://helk-elasticsearch:9200/"}
[2019-07-12T09:18:35,921][WARN ][logstash.licensechecker.licensereader] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-12T09:18:57,234][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://helk-elasticsearch:9200/"}
[2019-07-12T09:18:57,244][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-12T09:18:57,278][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://helk-elasticsearch:9200/"}
[2019-07-12T09:18:57,281][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-12T09:18:57,292][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://helk-elasticsearch:9200/"}
[2019-07-12T09:18:57,296][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-12T09:18:57,352][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://helk-elasticsearch:9200/"}
[2019-07-12T09:18:57,363][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-12T09:18:57,382][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://helk-elasticsearch:9200/"}
[2019-07-12T09:18:57,384][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-12T09:18:57,397][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://helk-elasticsearch:9200/"}
[2019-07-12T09:18:57,406][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-12T09:18:57,435][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://helk-elasticsearch:9200/"}
[2019-07-12T09:18:57,439][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-12T09:18:57,445][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://helk-elasticsearch:9200/"}
[2019-07-12T09:18:57,455][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-12T09:18:57,464][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://helk-elasticsearch:9200/"}
[2019-07-12T09:18:57,466][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-12T09:18:57,481][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://helk-elasticsearch:9200/"}
[2019-07-12T09:18:57,486][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-12T09:18:57,500][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://helk-elasticsearch:9200/"}
[2019-07-12T09:18:57,503][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-12T09:18:57,516][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://helk-elasticsearch:9200/"}
[2019-07-12T09:18:57,519][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-12T09:18:57,523][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://helk-elasticsearch:9200/"}
[2019-07-12T09:18:57,528][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-12T09:18:57,533][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://helk-elasticsearch:9200/"}
[2019-07-12T09:18:57,536][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-12T09:19:05,888][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch bulk_path=>"/_monitoring/bulk?system_id=logstash&system_api_version=7&interval=1s", hosts=>[http://helk-elasticsearch:9200], sniffing=>false, manage_template=>false, id=>"c4874b4cfe08abd56717d3ab13f1c13581dffcfe83fa2fbabb4fa0f7d4748207", document_type=>"%{[@metadata][document_type]}", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_5c71ee5a-9547-4aaa-aacc-219b23168c99", enable_metric=>true, charset=>"UTF-8">, workers=>1, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, ilm_enabled=>"auto", ilm_rollover_alias=>"logstash", ilm_pattern=>"{now/d}-000001", ilm_policy=>"logstash-policy", action=>"index", ssl_certificate_verification=>true, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[2019-07-12T09:19:06,086][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://helk-elasticsearch:9200/"}
[2019-07-12T09:19:06,100][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-12T09:20:36,115][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"Got response code '429' contacting Elasticsearch at URL 'http://helk-elasticsearch:9200/_xpack'"}
[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Setting Elasticsearch URL to http://helk-elasticsearch:9200
[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Waiting for elasticsearch URI to be accessible..
[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading templates to elasticsearch..
[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-all-default.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading indexme.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-winevent-all.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-winevent-winlogbeat-param-fields.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading powershell-direct-template.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading winevent-application-template.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading winevent-powershell-template.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading winevent-security-template.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading winevent-sysmon-template.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading winevent-system-template.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading winevent-wmiactivity-template.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-not-ip.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-meta-enrichment-for-endpoints.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-meta-enrichment-for-powershell.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-fingerprints-for-endpoints.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-fingerprints-powershell.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-not-ip.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-ip-dst.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-ip-dst-nat.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-ip-src.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-ip-src-nat.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-ipv6-dst.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-ipv6-dst-nat.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-ipv6-src.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-ipv6-src-nat.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-any-fields.json template to elasticsearch..
{"acknowledged":true}[HELK-ES-DOCKER-INSTALLATION-INFO] Configuring elasticsearch cluster settings..
{"acknowledged":true,"persistent":{"cluster":{"max_shards_per_node":"3000"},"indices":{"breaker":{"request":{"limit":"70%"}}},"search":{"max_open_scroll_context":"15000"}},"transient":{"cluster":{"max_shards_per_node":"3000"},"indices":{"breaker":{"request":{"limit":"70%"}}},"search":{"max_open_scroll_context":"15000"}}}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Checking Logstash plugins..
logstash-filter-prune
logstash-filter-i18n
logstash-input-wmi
[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Plugins are already installed
[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Setting LS_JAVA_OPTS to -Xms1332m -Xmx2665m
[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Running docker-entrypoint script..
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2019-07-12T12:57:03,974][WARN ][logstash.licensechecker.licensereader] Restored connection to ES instance {:url=>"http://helk-elasticsearch:9200/"}
[2019-07-12T12:57:04,070][WARN ][logstash.licensechecker.licensereader] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-12T12:57:26,110][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://helk-elasticsearch:9200/"}
[2019-07-12T12:57:26,118][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[... the identical "Restored connection" / "Detected a 6.x and above cluster" WARN pair repeats thirteen more times for the remaining elasticsearch outputs ...]
[2019-07-12T12:57:32,477][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch bulk_path=>"/_monitoring/bulk?system_id=logstash&system_api_version=7&interval=1s", hosts=>[http://helk-elasticsearch:9200], sniffing=>false, manage_template=>false, id=>"c4874b4cfe08abd56717d3ab13f1c13581dffcfe83fa2fbabb4fa0f7d4748207", document_type=>"%{[@metadata][document_type]}", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_75640004-172c-4744-a40f-4a1f4a647bc0", enable_metric=>true, charset=>"UTF-8">, workers=>1, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, ilm_enabled=>"auto", ilm_rollover_alias=>"logstash", ilm_pattern=>"{now/d}-000001", ilm_policy=>"logstash-policy", action=>"index", ssl_certificate_verification=>true, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[2019-07-12T12:57:32,590][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://helk-elasticsearch:9200/"}
[2019-07-12T12:57:32,601][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Setting Elasticsearch URL to http://helk-elasticsearch:9200
[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Waiting for elasticsearch URI to be accessible..
[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading templates to elasticsearch..
[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-all-default.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading indexme.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-winevent-all.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-winevent-winlogbeat-param-fields.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading powershell-direct-template.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading winevent-application-template.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading winevent-powershell-template.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading winevent-security-template.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading winevent-sysmon-template.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading winevent-system-template.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading winevent-wmiactivity-template.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-not-ip.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-meta-enrichment-for-endpoints.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-meta-enrichment-for-powershell.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-fingerprints-for-endpoints.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-fingerprints-powershell.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-not-ip.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-ip-dst.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-ip-dst-nat.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-ip-src.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-ip-src-nat.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-ipv6-dst.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-ipv6-dst-nat.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-ipv6-src.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-ipv6-src-nat.json template to elasticsearch..
{"acknowledged":true}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Uploading logs-any-fields.json template to elasticsearch..
{"acknowledged":true}[HELK-ES-DOCKER-INSTALLATION-INFO] Configuring elasticsearch cluster settings..
{"acknowledged":true,"persistent":{"cluster":{"max_shards_per_node":"3000"},"indices":{"breaker":{"request":{"limit":"70%"}}},"search":{"max_open_scroll_context":"15000"}},"transient":{"cluster":{"max_shards_per_node":"3000"},"indices":{"breaker":{"request":{"limit":"70%"}}},"search":{"max_open_scroll_context":"15000"}}}[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Checking Logstash plugins..
logstash-filter-prune
logstash-filter-i18n
logstash-input-wmi
[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Plugins are already installed
[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Setting LS_JAVA_OPTS to -Xms1470m -Xmx2940m
[HELK-LOGSTASH-DOCKER-INSTALLATION-INFO] Running docker-entrypoint script..
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2019-07-15T06:13:26,703][WARN ][logstash.licensechecker.licensereader] Restored connection to ES instance {:url=>"http://helk-elasticsearch:9200/"}
[2019-07-15T06:13:26,779][WARN ][logstash.licensechecker.licensereader] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-15T06:13:47,764][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://helk-elasticsearch:9200/"}
[2019-07-15T06:13:47,769][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-15T06:13:47,804][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://helk-elasticsearch:9200/"}
[2019-07-15T06:13:47,808][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-15T06:13:47,824][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://helk-elasticsearch:9200/"}
[2019-07-15T06:13:47,827][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-15T06:13:47,919][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://helk-elasticsearch:9200/"}
[2019-07-15T06:13:47,925][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-15T06:13:47,940][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://helk-elasticsearch:9200/"}
[2019-07-15T06:13:47,943][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-15T06:13:47,977][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://helk-elasticsearch:9200/"}
[2019-07-15T06:13:47,987][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-15T06:13:48,028][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://helk-elasticsearch:9200/"}
[2019-07-15T06:13:48,045][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-15T06:13:48,096][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://helk-elasticsearch:9200/"}
[2019-07-15T06:13:48,098][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-15T06:13:48,109][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://helk-elasticsearch:9200/"}
[2019-07-15T06:13:48,113][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-15T06:13:48,119][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://helk-elasticsearch:9200/"}
[2019-07-15T06:13:48,124][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-15T06:13:48,131][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://helk-elasticsearch:9200/"}
[2019-07-15T06:13:48,140][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-15T06:13:48,154][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://helk-elasticsearch:9200/"}
[2019-07-15T06:13:48,157][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-15T06:13:48,167][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://helk-elasticsearch:9200/"}
[2019-07-15T06:13:48,170][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-15T06:13:48,183][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://helk-elasticsearch:9200/"}
[2019-07-15T06:13:48,186][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-15T06:13:53,013][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch bulk_path=>"/_monitoring/bulk?system_id=logstash&system_api_version=7&interval=1s", hosts=>[http://helk-elasticsearch:9200], sniffing=>false, manage_template=>false, id=>"c4874b4cfe08abd56717d3ab13f1c13581dffcfe83fa2fbabb4fa0f7d4748207", document_type=>"%{[@metadata][document_type]}", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_3dd5e77d-44a2-44ab-9021-64b01d0ee619", enable_metric=>true, charset=>"UTF-8">, workers=>1, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, ilm_enabled=>"auto", ilm_rollover_alias=>"logstash", ilm_pattern=>"{now/d}-000001", ilm_policy=>"logstash-policy", action=>"index", ssl_certificate_verification=>true, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[2019-07-15T06:13:53,152][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://helk-elasticsearch:9200/"}
[2019-07-15T06:13:53,161][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
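Almost all of the Logstash output above is benign startup noise: the repeated "Restored connection" / "Detected a 6.x and above cluster" WARN pairs and the `document_type` deprecation notice are expected on an Elasticsearch 7.x cluster. The one line worth isolating is the ERROR from the license checker, where Elasticsearch answered with HTTP 429 (too many requests, i.e. the node was under pressure). A minimal sketch, assuming log lines in the `[timestamp][LEVEL][logger] message` format shown above, for filtering a captured log down to its ERROR lines:

```python
import re

# Matches Logstash log lines like:
# [2019-07-12T09:20:36,115][ERROR][logstash.licensechecker.licensereader] Unable to retrieve ...
LINE = re.compile(
    r"^\[(?P<ts>[^\]]+)\]\[(?P<level>\w+)\s*\]\[(?P<logger>[^\]]+)\]\s*(?P<msg>.*)$"
)

def errors_only(log_text: str):
    """Return (timestamp, logger, message) for every ERROR-level line."""
    out = []
    for line in log_text.splitlines():
        m = LINE.match(line)
        if m and m.group("level") == "ERROR":
            out.append((m.group("ts"), m.group("logger").strip(), m.group("msg")))
    return out

sample = """\
[2019-07-12T09:19:06,100][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster
[2019-07-12T09:20:36,115][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"Got response code '429' contacting Elasticsearch"}
"""

for ts, logger, msg in errors_only(sample):
    print(ts, logger)
```

Run against the full paste, this surfaces only the 429 license-checker failure, which points at Elasticsearch load rather than the Winlogbeat-to-Kafka path.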
sidomir7 commented 5 years ago

Kafka logs:

[2019-07-15 06:11:49,002] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
[HELK-DOCKER-INSTALLATION-INFO] Creating Kafka winlogbeat Topic..
[2019-07-15 06:11:49,152] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(__consumer_offsets-22, __consumer_offsets-30, winlogbeat-0, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, SYSMON_JOIN-0, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, _confluent-ksql-wardog_command_topic-0, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, filebeat-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-38, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-13, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager)
[2019-07-15 06:11:49,183] INFO Replica loaded for partition __consumer_offsets-0 with initial high watermark 674 (kafka.cluster.Replica)
[2019-07-15 06:11:49,189] INFO [Partition __consumer_offsets-0 broker=1] __consumer_offsets-0 starts at Leader Epoch 0 from offset 674. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[... the same "Replica loaded" / "Partition ... starts at Leader Epoch 0 from offset 0" INFO pair repeats for further __consumer_offsets partitions ...]
[2019-07-15 06:11:49,573] INFO Replica loaded for partition winlogbeat-0 with initial high watermark 3722 (kafka.cluster.Replica)
[2019-07-15 06:11:49,573] INFO [Partition winlogbeat-0 broker=1] winlogbeat-0 starts at Leader Epoch 0 from offset 3722. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2019-07-15 06:11:49,575] INFO Replica loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Replica)
[2019-07-15 06:11:49,575] INFO [Partition __consumer_offsets-1 broker=1] __consumer_offsets-1 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2019-07-15 06:11:49,675] INFO Replica loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Replica)
[2019-07-15 06:11:49,675] INFO [Partition __consumer_offsets-20 broker=1] __consumer_offsets-20 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2019-07-15 06:11:49,802] INFO Replica loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Replica)
[2019-07-15 06:11:49,802] INFO [Partition __consumer_offsets-39 broker=1] __consumer_offsets-39 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2019-07-15 06:11:49,943] INFO Replica loaded for partition SYSMON_JOIN-0 with initial high watermark 0 (kafka.cluster.Replica)
[2019-07-15 06:11:49,943] INFO [Partition SYSMON_JOIN-0 broker=1] SYSMON_JOIN-0 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2019-07-15 06:11:49,982] INFO Replica loaded for partition filebeat-0 with initial high watermark 0 (kafka.cluster.Replica)
[2019-07-15 06:11:49,983] INFO [Partition filebeat-0 broker=1] filebeat-0 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[... the same "Replica loaded" / "Partition ... starts at Leader Epoch 0 from offset 0" INFO pair repeats for the remaining __consumer_offsets partitions ...]
[2019-07-15 06:11:51,076] INFO Replica loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Replica)
[2019-07-15 06:11:51,077] INFO [Partition __consumer_offsets-2 broker=1] __consumer_offsets-2 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2019-07-15 06:11:51,233] INFO Replica loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Replica)
[2019-07-15 06:11:51,234] INFO [Partition __consumer_offsets-40 broker=1] __consumer_offsets-40 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[HELK-DOCKER-INSTALLATION-INFO] Creating Kafka SYSMON_JOIN Topic..
[2019-07-15 06:11:51,336] INFO Replica loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Replica)
[2019-07-15 06:11:51,337] INFO [Partition __consumer_offsets-37 broker=1] __consumer_offsets-37 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2019-07-15 06:11:51,445] INFO Replica loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Replica)
[2019-07-15 06:11:51,445] INFO [Partition __consumer_offsets-18 broker=1] __consumer_offsets-18 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2019-07-15 06:11:51,455] INFO Replica loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Replica)
[2019-07-15 06:11:51,455] INFO [Partition __consumer_offsets-34 broker=1] __consumer_offsets-34 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2019-07-15 06:11:51,519] INFO Replica loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Replica)
[2019-07-15 06:11:51,519] INFO [Partition __consumer_offsets-15 broker=1] __consumer_offsets-15 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2019-07-15 06:11:51,602] INFO Replica loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Replica)
[2019-07-15 06:11:51,602] INFO [Partition __consumer_offsets-12 broker=1] __consumer_offsets-12 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2019-07-15 06:11:51,748] INFO Replica loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Replica)
[2019-07-15 06:11:51,749] INFO [Partition __consumer_offsets-31 broker=1] __consumer_offsets-31 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2019-07-15 06:11:51,889] INFO Replica loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Replica)
[2019-07-15 06:11:51,889] INFO [Partition __consumer_offsets-9 broker=1] __consumer_offsets-9 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2019-07-15 06:11:52,046] INFO Replica loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Replica)
[2019-07-15 06:11:52,047] INFO [Partition __consumer_offsets-47 broker=1] __consumer_offsets-47 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2019-07-15 06:11:52,261] INFO Replica loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Replica)
[2019-07-15 06:11:52,262] INFO [Partition __consumer_offsets-19 broker=1] __consumer_offsets-19 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
WARNING: Due to limitations in metric names, topics with a period ('.') or underscore ('_') could collide. To avoid issues it is best to use either, but not both.
[2019-07-15 06:11:52,403] INFO Replica loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Replica)
[2019-07-15 06:11:52,404] INFO [Partition __consumer_offsets-28 broker=1] __consumer_offsets-28 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2019-07-15 06:11:52,558] INFO Replica loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Replica)
[2019-07-15 06:11:52,558] INFO [Partition __consumer_offsets-38 broker=1] __consumer_offsets-38 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2019-07-15 06:11:52,656] INFO Replica loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Replica)
[2019-07-15 06:11:52,656] INFO [Partition __consumer_offsets-35 broker=1] __consumer_offsets-35 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2019-07-15 06:11:52,726] INFO Replica loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Replica)
[2019-07-15 06:11:52,726] INFO [Partition __consumer_offsets-6 broker=1] __consumer_offsets-6 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2019-07-15 06:11:52,776] INFO Replica loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Replica)
[2019-07-15 06:11:52,776] INFO [Partition __consumer_offsets-44 broker=1] __consumer_offsets-44 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2019-07-15 06:11:52,858] INFO Replica loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Replica)
[2019-07-15 06:11:52,858] INFO [Partition __consumer_offsets-25 broker=1] __consumer_offsets-25 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2019-07-15 06:11:52,873] INFO Replica loaded for partition _confluent-ksql-wardog_command_topic-0 with initial high watermark 0 (kafka.cluster.Replica)
[2019-07-15 06:11:52,874] INFO [Partition _confluent-ksql-wardog_command_topic-0 broker=1] _confluent-ksql-wardog_command_topic-0 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2019-07-15 06:11:53,058] INFO Replica loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Replica)
[2019-07-15 06:11:53,058] INFO [Partition __consumer_offsets-16 broker=1] __consumer_offsets-16 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[HELK-DOCKER-INSTALLATION-INFO] Creating Kafka filebeat Topic..
[2019-07-15 06:11:53,102] INFO Replica loaded for partition __consumer_offsets-22 with initial high watermark 692 (kafka.cluster.Replica)
[2019-07-15 06:11:53,102] INFO [Partition __consumer_offsets-22 broker=1] __consumer_offsets-22 starts at Leader Epoch 0 from offset 692. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2019-07-15 06:11:53,104] INFO Replica loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Replica)
[2019-07-15 06:11:53,104] INFO [Partition __consumer_offsets-41 broker=1] __consumer_offsets-41 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2019-07-15 06:11:53,150] INFO Replica loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Replica)
[2019-07-15 06:11:53,150] INFO [Partition __consumer_offsets-32 broker=1] __consumer_offsets-32 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2019-07-15 06:11:53,254] INFO Replica loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Replica)
[2019-07-15 06:11:53,255] INFO [Partition __consumer_offsets-3 broker=1] __consumer_offsets-3 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2019-07-15 06:11:53,270] INFO Replica loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Replica)
[2019-07-15 06:11:53,270] INFO [Partition __consumer_offsets-13 broker=1] __consumer_offsets-13 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2019-07-15 06:11:53,319] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,321] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,321] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,321] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,321] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,321] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,321] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,321] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,321] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,321] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,321] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,321] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,321] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,321] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,321] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,321] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,321] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,321] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,321] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,321] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,321] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,321] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,321] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,321] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,321] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,321] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,321] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,322] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,322] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,322] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,322] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,322] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,322] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,322] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,322] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,322] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,322] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,322] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,322] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,322] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,322] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,322] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,322] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,322] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,322] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,322] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,322] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,322] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,322] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,322] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,439] INFO [GroupCoordinator 1]: Loading group metadata for helk_logstash with generation 5 (kafka.coordinator.group.GroupCoordinator)
[2019-07-15 06:11:53,448] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 127 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,449] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,449] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,450] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,450] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,450] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,450] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,450] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,450] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,451] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,451] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,451] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,451] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,451] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,452] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,452] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,452] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,452] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,452] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,453] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,453] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,453] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,453] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,453] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,454] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,454] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,454] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,454] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,454] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,454] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,455] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,455] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,455] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,497] INFO [GroupCoordinator 1]: Loading group metadata for  with generation 0 (kafka.coordinator.group.GroupCoordinator)
[2019-07-15 06:11:53,497] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 42 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,497] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,498] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,498] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,498] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,498] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,498] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,498] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,499] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,499] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,499] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,499] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,499] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,499] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,499] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,500] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:53,500] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-07-15 06:11:56,179] INFO [Admin Manager on Broker 1]: Updating topic _confluent-ksql-wardog_command_topic with new configuration org.apache.kafka.common.requests.AlterConfigsRequest$Config@33a49eb1 (kafka.server.AdminManager)
[2019-07-15 06:11:56,217] INFO Processing notification(s) to /config/changes (kafka.common.ZkNodeChangeNotificationListener)
[2019-07-15 06:11:56,218] INFO Processing override for entityPath: topics/_confluent-ksql-wardog_command_topic with config: Map(retention.ms -> 9223372036854775807) (kafka.server.DynamicConfigManager)
[2019-07-15 06:12:17,214] INFO [Log partition=winlogbeat-0, dir=/tmp/kafka-logs] Found deletable segments with base offsets [0] due to retention time 14400000ms breach (kafka.log.Log)
[2019-07-15 06:12:17,267] INFO [Log partition=winlogbeat-0, dir=/tmp/kafka-logs] Rolled new log segment at offset 3722 in 51 ms. (kafka.log.Log)
[2019-07-15 06:12:17,267] INFO [Log partition=winlogbeat-0, dir=/tmp/kafka-logs] Scheduling log segment [baseOffset 0, size 832566] for deletion. (kafka.log.Log)
[2019-07-15 06:12:17,268] INFO [Log partition=winlogbeat-0, dir=/tmp/kafka-logs] Incrementing log start offset to 3722 (kafka.log.Log)
[2019-07-15 06:12:23,449] INFO [GroupCoordinator 1]: Member logstash-0-128e19ac-d5e2-4295-9093-8b8344297299 in group helk_logstash has failed, removing it from the group (kafka.coordinator.group.GroupCoordinator)
[2019-07-15 06:12:23,452] INFO [GroupCoordinator 1]: Preparing to rebalance group helk_logstash in state PreparingRebalance with old generation 5 (__consumer_offsets-22) (reason: removing member logstash-0-128e19ac-d5e2-4295-9093-8b8344297299 on heartbeat expiration) (kafka.coordinator.group.GroupCoordinator)
[2019-07-15 06:12:23,454] INFO [GroupCoordinator 1]: Group helk_logstash with generation 6 is now empty (__consumer_offsets-22) (kafka.coordinator.group.GroupCoordinator)
[2019-07-15 06:13:17,269] INFO [Log partition=winlogbeat-0, dir=/tmp/kafka-logs] Deleting segment 0 (kafka.log.Log)
[2019-07-15 06:13:17,270] INFO Deleted log /tmp/kafka-logs/winlogbeat-0/00000000000000000000.log.deleted. (kafka.log.LogSegment)
[2019-07-15 06:13:17,313] INFO Deleted offset index /tmp/kafka-logs/winlogbeat-0/00000000000000000000.index.deleted. (kafka.log.LogSegment)
[2019-07-15 06:13:17,313] INFO Deleted time index /tmp/kafka-logs/winlogbeat-0/00000000000000000000.timeindex.deleted. (kafka.log.LogSegment)
[2019-07-15 06:13:53,132] INFO [GroupCoordinator 1]: Preparing to rebalance group helk_logstash in state PreparingRebalance with old generation 6 (__consumer_offsets-22) (reason: Adding new member logstash-0-5bd88a73-c5a7-4c22-a9d9-f74dec81e18b) (kafka.coordinator.group.GroupCoordinator)
[2019-07-15 06:13:55,136] INFO [GroupCoordinator 1]: Stabilized group helk_logstash generation 7 (__consumer_offsets-22) (kafka.coordinator.group.GroupCoordinator)
[2019-07-15 06:13:55,144] INFO [GroupCoordinator 1]: Assignment received from leader for group helk_logstash for generation 7 (kafka.coordinator.group.GroupCoordinator)
[2019-07-15 06:15:58,457] WARN Client session timed out, have not heard from server in 8012ms for sessionid 0x100000178d90000 (org.apache.zookeeper.ClientCnxn)
[2019-07-15 06:15:58,470] INFO Client session timed out, have not heard from server in 8012ms for sessionid 0x100000178d90000, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2019-07-15 06:16:00,193] INFO Opening socket connection to server helk-zookeeper/172.18.0.8:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2019-07-15 06:16:00,194] INFO Socket connection established to helk-zookeeper/172.18.0.8:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2019-07-15 06:16:00,196] WARN Unable to reconnect to ZooKeeper service, session 0x100000178d90000 has expired (org.apache.zookeeper.ClientCnxn)
[2019-07-15 06:16:00,196] INFO Unable to reconnect to ZooKeeper service, session 0x100000178d90000 has expired, closing socket connection (org.apache.zookeeper.ClientCnxn)
[2019-07-15 06:16:00,197] INFO EventThread shut down for session: 0x100000178d90000 (org.apache.zookeeper.ClientCnxn)
[2019-07-15 06:16:00,197] INFO [ZooKeeperClient] Session expired. (kafka.zookeeper.ZooKeeperClient)
[2019-07-15 06:16:00,200] INFO [ZooKeeperClient] Initializing a new session to helk-zookeeper:2181. (kafka.zookeeper.ZooKeeperClient)
[2019-07-15 06:16:00,200] INFO Initiating client connection, connectString=helk-zookeeper:2181 sessionTimeout=6000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@703580bf (org.apache.zookeeper.ZooKeeper)
[2019-07-15 06:16:00,204] INFO Opening socket connection to server helk-zookeeper/172.18.0.8:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2019-07-15 06:16:00,205] INFO Socket connection established to helk-zookeeper/172.18.0.8:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2019-07-15 06:16:00,206] INFO Session establishment complete on server helk-zookeeper/172.18.0.8:2181, sessionid = 0x100000178d90004, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2019-07-15 06:16:00,207] INFO Processing notification(s) to /config/changes (kafka.common.ZkNodeChangeNotificationListener)
[2019-07-15 06:16:00,208] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
[2019-07-15 06:16:00,210] INFO Stat of the created znode at /brokers/ids/1 is: 251,251,1563171360209,1563171360209,1,0,0,72057600360382468,196,0,251 (kafka.zk.KafkaZkClient)
[2019-07-15 06:16:00,210] INFO Registered broker 1 at path /brokers/ids/1 with addresses: ArrayBuffer(EndPoint(192.168.1.116,9092,ListenerName(PLAINTEXT),PLAINTEXT)), czxid (broker epoch): 251 (kafka.zk.KafkaZkClient)
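Side note on the WARN/expiry sequence above: the broker went 8012 ms without hearing from ZooKeeper while the negotiated session timeout was only 6000 ms, which is typical of a resource-starved Docker host rather than a Kafka bug. One common mitigation (not HELK's official fix, just a sketch with assumed file paths) is raising `zookeeper.session.timeout.ms` in the broker's `server.properties`:

```shell
# Sketch, assuming a Kafka-style properties file; /tmp/server.properties
# stands in for the real config/server.properties inside the helk-kafka container.
CONF=/tmp/server.properties
echo "zookeeper.session.timeout.ms=6000" > "$CONF"

# Raise the 6000 ms default to a more forgiving 18000 ms so brief stalls
# on a loaded host do not expire the broker's ZooKeeper session.
sed -i 's/^zookeeper.session.timeout.ms=.*/zookeeper.session.timeout.ms=18000/' "$CONF"
grep zookeeper.session.timeout.ms "$CONF"
```

After a change like this the broker needs a restart to pick up the new timeout.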
sidomir7 commented 5 years ago

Hi, you can close this, I managed to solve the problem. It was on my side: during the installation I had a problem with my internet connection. I reinstalled HELK again today and it is running fine.