Cyb3rWard0g / HELK

The Hunting ELK

Not pushing Zeek logs to dashboard #492

Closed: hartescout closed this issue 3 years ago

hartescout commented 4 years ago

Describe the problem

I am unable to read data from Zeek 3.2.0 running on Ubuntu 18.04. Filebeat (from the repo) looks to be connecting and pushing data, but nothing is visualizing. I am probably missing a setting in Kibana, but I can't figure it out. The Zeek Filebeat module is enabled and pointed at the correct log directory (roughly as sketched below), so the problem is most likely something in filebeat.yml?
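For reference, a rough sketch of how the module side is set up (the paths in modules.d/zeek.yml reflect my log location; everything else is the stock Filebeat module workflow):

```
# Enable the Zeek module that ships with Filebeat
sudo filebeat modules enable zeek

# Then each fileset in /etc/filebeat/modules.d/zeek.yml is pointed at the logs, e.g.
#   connection:
#     enabled: true
#     var.paths: ["/var/log/bro/current/conn.log"]
```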

Winlogbeat is pushing Sysmon logs just fine from my other endpoints with a similarly minimal configuration (winlogbeat.yml).

HELK is installed on a Linode instance with 6 shared cores and 16 GB of RAM.

Provide the output of the following commands

- VERSION="18.04.5 LTS (Bionic Beaver)"
- ID=ubuntu
- ID_LIKE=debian
- PRETTY_NAME="Ubuntu 18.04.5 LTS"

**Docker Space:**
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda        315G   17G  283G   6% /
**Memory:**
              total        used        free      shared  buff/cache   available
Mem:             15          11           0           0           4           4
Swap:             0           0           0
**Cores:**
6  
Get output of the HELK docker containers:  
CONTAINER ID        IMAGE                                                 COMMAND
667d0e712100        confluentinc/cp-ksql-cli:5.1.3                        "/bini
eb869ba37e1f        confluentinc/cp-ksql-server:5.1.3                     "/etcr
a63b9a464346        otrf/helk-kafka-broker:2.4.0                          "./kar
c70112a05fa8        otrf/helk-spark-worker:2.4.5                          "./spr
35c037d91827        otrf/helk-zookeeper:2.4.0                             "./zor
412da3313034        otrf/helk-spark-master:2.4.5                          "./spr
311615f80d27        docker_helk-jupyter                                   "/optr
d6806b612c24        otrf/helk-elastalert:0.4.0                            "./elt
025baed0e98a        otrf/helk-nginx:0.3.0                                 "/optx
3d77b35f33cd        otrf/helk-logstash:7.6.2.1                            "/usrh
169a57701a52        docker.elastic.co/kibana/kibana:7.6.2                 "/usra
26886b101d3f        docker.elastic.co/elasticsearch/elasticsearch:7.6.2   "/usrh
8b41f15003ccfd3435f36c830cff4da75ba66de0
filebeat version 7.8.1 (amd64), libbeat 7.8.1 [94f7632be5d56a7928595da79f4b829ffe123744 built 2020-07-21 15:12:45 +0000 UTC]

Filebeat.yml

###################### Filebeat Zeek/Corelight Configuration Example #########################
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-reference-yml.html
#----------------------------- Input Logs --------------------------------
filebeat.inputs:
- type: log
  enabled: true
  # Change this to the directory of where your Zeek logs are stored
  paths:
    - /var/log/bro/current/*.log
  #json.keys_under_root: true
  #fields_under_root: true
#----------------------------- Kafka output --------------------------------
output.kafka:
  # Place your HELK IP(s) here (keep the port).
  hosts: ["***.***.***.***:9092"]
  topic: "zeek"
  max_message_bytes: 1000000
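One thing I notice in my own config above: the json options are still commented out. If Zeek is writing JSON logs (for example via LogAscii::use_json or the json-streaming-logs package), I assume the input would need something like the following so each line is decoded before it reaches Kafka; this is a sketch, not what I am currently running:

```
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/bro/current/*.log
  # Only applies when Zeek writes JSON rather than the default TSV logs
  json.keys_under_root: true
  json.add_error_key: true
  fields_under_root: true
```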

Output of filebeat -e

2020-08-17T01:58:28.396-0700    INFO    instance/beat.go:310    Setup Beat: filebeat; Version: 7.8.1
2020-08-17T01:58:28.398-0700    INFO    [publisher] pipeline/module.go:113  Beat name: udock
2020-08-17T01:58:28.417-0700    WARN    beater/filebeat.go:156  Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2020-08-17T01:58:28.417-0700    INFO    [monitoring]    log/log.go:118  Starting metrics logging every 30s
2020-08-17T01:58:28.417-0700    INFO    instance/beat.go:463    filebeat start running.
2020-08-17T01:58:28.418-0700    WARN    beater/filebeat.go:339  Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2020-08-17T01:58:28.418-0700    INFO    registrar/registrar.go:145  Loading registrar data from /var/lib/filebeat/registry/filebeat/data.json
2020-08-17T01:58:28.420-0700    INFO    registrar/registrar.go:152  States Loaded from registrar: 18
2020-08-17T01:58:28.420-0700    INFO    [crawler]   beater/crawler.go:71    Loading Inputs: 1
2020-08-17T01:58:28.554-0700    INFO    log/input.go:152    Configured paths: [/var/log/bro/current/*.log]
2020-08-17T01:58:28.554-0700    INFO    [crawler]   beater/crawler.go:141   Starting input (ID: 15126813079538667792)
2020-08-17T01:58:28.554-0700    INFO    [crawler]   beater/crawler.go:108   Loading and starting Inputs completed. Enabled inputs: 1
2020-08-17T01:58:28.567-0700    INFO    log/harvester.go:297    Harvester started for file: /var/log/bro/current/conn.log
2020-08-17T01:58:28.576-0700    INFO    log/harvester.go:297    Harvester started for file: /var/log/bro/current/dns.log
2020-08-17T01:58:29.576-0700    INFO    [publisher_pipeline_output] pipeline/output.go:144  Connecting to kafka(74.207.254.114:9092)
2020-08-17T01:58:29.577-0700    INFO    [publisher] pipeline/retry.go:221   retryer: send unwait signal to consumer
2020-08-17T01:58:29.577-0700    INFO    [publisher] pipeline/retry.go:225     done
2020-08-17T01:58:29.577-0700    INFO    [publisher_pipeline_output] pipeline/output.go:152  Connection to kafka(74.207.254.114:9092) established
2020-08-17T01:58:38.581-0700    INFO    log/harvester.go:297    Harvester started for file: /var/log/bro/current/weird.log
2020-08-17T01:58:38.582-0700    INFO    log/harvester.go:297    Harvester started for file: /var/log/bro/current/ssl.log
^C2020-08-17T01:58:47.811-0700  INFO    beater/filebeat.go:456  Stopping filebeat
2020-08-17T01:58:47.812-0700    INFO    beater/crawler.go:148   Stopping Crawler
2020-08-17T01:58:47.812-0700    INFO    beater/crawler.go:158   Stopping 1 inputs
2020-08-17T01:58:47.812-0700    INFO    [crawler]   beater/crawler.go:163   Stopping input: 15126813079538667792
2020-08-17T01:58:47.812-0700    INFO    input/input.go:138  input ticker stopped
2020-08-17T01:58:47.812-0700    INFO    log/harvester.go:320    Reader was closed: /var/log/bro/current/dns.log. Closing.
2020-08-17T01:58:47.812-0700    INFO    log/harvester.go:320    Reader was closed: /var/log/bro/current/ssl.log. Closing.
2020-08-17T01:58:47.812-0700    INFO    log/harvester.go:320    Reader was closed: /var/log/bro/current/conn.log. Closing.
2020-08-17T01:58:47.812-0700    INFO    log/harvester.go:320    Reader was closed: /var/log/bro/current/weird.log. Closing.
2020-08-17T01:58:47.813-0700    INFO    beater/crawler.go:178   Crawler stopped
2020-08-17T01:58:47.813-0700    INFO    registrar/registrar.go:367  Stopping Registrar
2020-08-17T01:58:47.813-0700    INFO    registrar/registrar.go:293  Ending Registrar
2020-08-17T01:58:47.830-0700    INFO    [monitoring]    log/log.go:153  Total non-zero metrics  {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":170,"time":{"ms":172}},"total":{"ticks":410,"time":{"ms":412},"value":410},"user":{"ticks":240,"time":{"ms":240}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":9},"info":{"ephemeral_id":"2b2a2d7a-3f73-4655-9a2b-cc3f5d637c7e","uptime":{"ms":19510}},"memstats":{"gc_next":14224992,"memory_alloc":11437776,"memory_total":24584392,"rss":58589184},"runtime":{"goroutines":24}},"filebeat":{"events":{"active":1,"added":45,"done":44},"harvester":{"closed":4,"open_files":0,"running":0,"started":4}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":22,"batches":9,"total":22},"type":"kafka"},"outputs":{"kafka":{"bytes_read":741,"bytes_write":7912}},"pipeline":{"clients":0,"events":{"active":1,"filtered":22,"published":23,"retry":4,"total":45},"queue":{"acked":22}}},"registrar":{"states":{"current":18,"update":44},"writes":{"success":30,"total":30}},"system":{"cpu":{"cores":4},"load":{"1":1.12,"15":1.58,"5":1.45,"norm":{"1":0.28,"15":0.395,"5":0.3625}}}}}}
2020-08-17T01:58:47.830-0700    INFO    [monitoring]    log/log.go:154  Uptime: 19.521008138s
2020-08-17T01:58:47.831-0700    INFO    [monitoring]    log/log.go:131  Stopping metrics logging.
2020-08-17T01:58:47.831-0700    INFO    instance/beat.go:469    filebeat stopped.
Cyb3rWard0g commented 4 years ago

Hey @hartescout! Quick question: I can see Filebeat establishing a connection to Kafka; however, I do not see any log line saying that it was actually able to send any logs, right?

If it is actually sending logs, did you check all the indices from this output config? https://github.com/Cyb3rWard0g/HELK/blob/ebf25b5d2d04603af49258c789f4d72ab23c5e98/docker/helk-logstash/pipeline/9998-catch_all-output.conf

I wonder if the data is ending up in the indexme-* indices, maybe?
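A quick way to check for those, assuming the default HELK Elasticsearch endpoint and credentials (adjust the host and password for your install), would be something like:

```
# List any catch-all indices Elasticsearch has created so far
curl -s -u elastic:<password> 'http://<helk-ip>:9200/_cat/indices/indexme-*?v'
```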

hartescout commented 4 years ago

I think that might be the issue. I haven't looked at the conf or indexme-* indices. I'll take a look and update. Thanks for the help!

hartescout commented 4 years ago

@Cyb3rWard0g Hey, thanks for the help so far. Here is the readout from the first "indexme-*" search I did on the machine running HELK. I have not altered anything in any file besides filebeat.yml.

I'll keep tooling around. Hope this helps, let me know if you need any more output/data!

edit: Shoot, I just noticed the password is the default, 'elasticpassword'. Would I need to change that to what I configured when asked during install?

```
# Not in schema yet
else if [@metadata][helk_parsed] != "yes" and [source] != "/var/log/osquery/osqueryd.results.log" and [@metadat$

  # Zeek temporary not in schema
  if [event_log] == "zeek" {
    elasticsearch {
      hosts => ["helk-elasticsearch:9200"]
      index => "indexme-zeek-%{+YYYY.MM.dd}"
      # document_id => "%{[@metadata][log_hash]}"
      user => 'elastic'
      #password => 'elasticpassword'
    }
```
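If the events really are landing in an indexme-zeek-* index, I'm guessing they also won't show up in Kibana until an index pattern covers them. A sketch of creating one through the Kibana saved objects API (the host, credentials, and object id below are placeholders for my install, not something from the HELK docs):

```
curl -s -k -u elastic:<password> -X POST \
  'https://<helk-ip>/api/saved_objects/index-pattern/indexme-zeek' \
  -H 'kbn-xsrf: true' -H 'Content-Type: application/json' \
  -d '{"attributes": {"title": "indexme-zeek-*", "timeFieldName": "@timestamp"}}'
```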

Cyb3rWard0g commented 4 years ago

The password line is a comment and does not apply here. You mentioned that the Filebeat output showed it connecting properly to the Kafka broker from HELK; do you have that screenshot? I want to confirm whether the problem is between the client and HELK (the Kafka broker) or in the pipeline itself.
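One quick way to confirm messages are actually reaching the broker, assuming the standard Kafka console tools are available inside the helk-kafka-broker container (the container name and tool location are assumptions, adjust to your install), would be something like:

```
# Read a few messages from the zeek topic directly on the broker
docker exec -ti helk-kafka-broker kafka-console-consumer.sh \
  --bootstrap-server helk-kafka-broker:9092 \
  --topic zeek --from-beginning --max-messages 5
```

If nothing comes back, the problem is on the client -> broker side; if Zeek events do show up here, the problem is in the Logstash pipeline or the indices.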

hartescout commented 4 years ago

I still owe you an answer. I'll get that to you soon; I'm away from my desk for a couple of days.

neu5ron commented 3 years ago

No response, closing.