deviantony / docker-elk

The Elastic stack (ELK) powered by Docker and Compose.
MIT License

logstash listeners won't show up in 'netstat -na' #624

Closed ewitkop-panw closed 2 years ago

ewitkop-panw commented 3 years ago

Problem description

UDP/TCP listeners configured in Logstash configuration files won't work, although a listener works fine in Filebeat. The larger problem is that I cannot parse KV pairs out of the message portion of the syslog. I need to add a filter, which can be done in Logstash, but I am sending my logs to Filebeat, so I am a bit stuck. How can I get a listener to work in Logstash?

Extra information

This is the exact issue I have:

https://discuss.elastic.co/t/parse-syslog-from-syslog/168291
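
For reference, a minimal sketch of the kind of Logstash filter this calls for, using the kv filter to split pairs out of the message field. The target name and separators below are placeholders and must be adapted to the actual Palo Alto syslog format:

filter {
  # Split "key=value" pairs out of the syslog message body into a "pan" object.
  # field_split and value_split are placeholders; adjust to the real delimiters.
  kv {
    source => "message"
    target => "pan"
    field_split => ","
    value_split => "="
  }
}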

antoineco commented 3 years ago

@ewitkop-panw could you please share your Logstash configuration? There is a TCP input enabled by default, and it definitely works; it is used in our automated tests.
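
For reference, the default pipeline shipped with this repository defines inputs roughly like the following (ports matching the Compose file quoted later in this thread):

input {
  beats {
    port => 5044
  }
  tcp {
    port => 5000
  }
}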

By the way, netstat won't show you the listening ports because Logstash is running inside a container; its sockets live in the container's network namespace, not the host's.
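
A quick way to see what the container actually publishes, assuming the stack was started with docker-compose from this repository (the container name below follows Compose v1 naming and may differ):

docker-compose ps logstash                # show the port mappings for the Logstash service
docker port docker-elk_logstash_1 5044    # or query a single mapping directly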

ewitkop-panw commented 3 years ago

I am able to get Filebeat to connect to Logstash on port 5044, but the logs don't seem to flow through the connection.

2021-09-17T23:50:48.824Z    INFO    [publisher_pipeline_output] pipeline/output.go:143  Connecting to backoff(async(tcp://localhost:5044))
2021-09-17T23:50:48.824Z    INFO    [publisher] pipeline/retry.go:219   retryer: send unwait signal to consumer
2021-09-17T23:50:48.824Z    INFO    [publisher] pipeline/retry.go:223     done
2021-09-17T23:50:48.824Z    INFO    [publisher_pipeline_output] pipeline/output.go:151  Connection to backoff(async(tcp://localhost:5044)) established

Filebeat is installed on the host Ubuntu box.
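
One way to verify the Beat side of the connection is Filebeat's built-in connectivity check, a standard subcommand rather than anything specific to this setup:

filebeat test output    # tests the connection to the configured output (Logstash on localhost:5044 here)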

root@ip-192-168-45-101:~/docker-elk/logstash/pipeline# cat logstash.conf 
input {
  beats {
    port => 5044
    codec => json
    ssl => false
    client_inactivity_timeout => 30000
  }
}

## Add your filters / logstash plugins configuration here

output {
    elasticsearch {
        #hosts => ["https://192.168.45.101:9200"]
        hosts => ["https://elasticsearch:9200"]
        user => "elastic"
        password => "changeme"
        ecs_compatibility => disabled
        ssl => true
        cacert => "config/elasticsearch-ca.pem"
    }
}
root@ip-192-168-45-101:~# more  /etc/filebeat/filebeat.yml

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: false

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*

- type: syslog
  enabled: true
  protocol.udp:
    host: "192.168.45.101:514" # IP:Port of host receiving syslog traffic

#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 3

#============================== Dashboards =====================================

setup.dashboards.enabled: true

#============================== Kibana =====================================

setup.kibana:

  host: "192.168.45.101:5601"
  ssl.enabled: false
  username: elastic
  password: changeme

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  # hosts: ["localhost:9200"]
  #ssl.verification_mode: none

#----------------------------- Logstash output --------------------------------
output.logstash:

  hosts: ["localhost:5044"]
  timeout: 300
  ssl.enabled: false

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
ewitkop-panw commented 3 years ago
root@ip-192-168-45-101:~/docker-elk# more docker-compose.yml 
version: '3.2'

services:
  elasticsearch:
    build:
      context: elasticsearch/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./elasticsearch/config/elasticsearch.yml
        target: /usr/share/elasticsearch/config/elasticsearch.yml
        read_only: true
      - type: volume
        source: elasticsearch
        target: /usr/share/elasticsearch/data
      # (!) TLS certificates. Generate using instructions from tls/README.md.
      - type: bind
        source: ./tls/elasticsearch/elasticsearch.p12
        target: /usr/share/elasticsearch/config/elasticsearch.p12
        read_only: true
      - type: bind
        source: ./tls/elasticsearch/http.p12
        target: /usr/share/elasticsearch/config/http.p12
        read_only: true
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      ELASTIC_PASSWORD: changeme
      # Use single node discovery in order to disable production mode and avoid bootstrap checks.
      # see: https://www.elastic.co/guide/en/elasticsearch/reference/current/bootstrap-checks.html
      discovery.type: single-node
    networks:
      - elk

  logstash:
    build:
      context: logstash/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./logstash/config/logstash.yml
        target: /usr/share/logstash/config/logstash.yml
        read_only: true
      - type: bind
        source: ./logstash/pipeline
        target: /usr/share/logstash/pipeline
        read_only: true
      # (!) CA certificate. Generate using instructions from tls/README.md
      - type: bind
        source: ./tls/kibana/elasticsearch-ca.pem
        target: /usr/share/logstash/config/elasticsearch-ca.pem
        read_only: true
    ports:
      - "5044:5044"
      - "5000:5000/tcp"
      - "5000:5000/udp"
      - "9600:9600"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - elasticsearch

  kibana:
    build:
      context: kibana/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./kibana/config/kibana.yml
        target: /usr/share/kibana/config/kibana.yml
        read_only: true
      - type: bind
        source: ./tls/kibana/elasticsearch-ca.pem
        target: /usr/share/kibana/config/elasticsearch-ca.pem
        read_only: true
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch

networks:
  elk:
    driver: bridge

volumes:
  elasticsearch:
ewitkop-panw commented 3 years ago
filebeat version 7.14.1 (amd64), libbeat 7.14.1 [703d589a09cfdbfd7f84c1d990b50b6b7f62ac29 built 2021-08-26 09:12:57 +0000 UTC]
ewitkop-panw commented 2 years ago

I am making progress. I rebooted the host and port 5044 stayed up. I changed the codec on the Logstash side, and I started seeing parsing errors for my firewall logs. This was good news.

Now it appears that Logstash is simply not creating an index in Kibana. I am trying to solve that now.

antoineco commented 2 years ago

Glad to see you're making progress 👍

Logstash creates indices in Elasticsearch, not in Kibana. By default, a new index is created at the beginning of every day. If you meant the Kibana index pattern, that is true: Logstash does not create it; you must create it manually by following the instructions from the README.
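
With the configuration above (ecs_compatibility disabled, no data streams), the daily indices are named logstash-YYYY.MM.dd by default. Their existence can be confirmed with the cat indices API; the credentials and the -k flag assume this stack's defaults and self-signed TLS:

curl -k -u elastic:changeme 'https://localhost:9200/_cat/indices/logstash-*?v'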

antoineco commented 2 years ago

By the way, is there any reason why you are sending data to Logstash and not directly to Elasticsearch?

If you are not performing any filtering or data transformation in your Logstash pipeline, you may as well simplify the data flow by skipping Logstash altogether. (If you enabled TLS, you will need to copy the CA certificate to your Beat nodes for it to work. Example)
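
A sketch of what that would look like in filebeat.yml, modelled on the linked example; the CA path below is hypothetical, and the certificate must first be copied from the tls/ directory of this repository onto the Beat host:

output.elasticsearch:
  hosts: ["https://localhost:9200"]
  username: "elastic"
  password: "changeme"
  # CA certificate copied from tls/kibana/elasticsearch-ca.pem (path is an example)
  ssl.certificate_authorities: ["/etc/filebeat/elasticsearch-ca.pem"]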

ewitkop-panw commented 2 years ago

I plan on adding filtering once this is all working. My Palo Alto firewall logs have all the data in the message field, and it is not broken out into KV pairs.


ewitkop-panw commented 2 years ago

So I can confirm that logs are coming from Filebeat to Logstash. If you change the Logstash codec to json, you will see a parsing error that contains the actual syslog message, so that part is working. And now I have Logstash talking to Elasticsearch. However, I don't see anything on the "Logs" page of Kibana.

logstash_1       | [2021-09-20T13:37:19,968][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elasticsearch:9200"]}
logstash_1       | [2021-09-20T13:37:19,977][WARN ][logstash.outputs.elasticsearch][main] ** WARNING ** Detected UNSAFE options in elasticsearch output configuration!
logstash_1       | ** WARNING ** You have enabled encryption but DISABLED certificate verification.
logstash_1       | ** WARNING ** To make sure your data is secure change :ssl_certificate_verification to true
logstash_1       | [2021-09-20T13:37:20,016][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://elastic:xxxxxx@elasticsearch:9200/]}}
kibana_1         | {"type":"log","@timestamp":"2021-09-20T13:37:20+00:00","tags":["info","savedobjects-service"],"pid":952,"message":"Starting saved objects migrations"}
logstash_1       | [2021-09-20T13:37:20,648][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"https://elastic:xxxxxx@elasticsearch:9200/"}
logstash_1       | [2021-09-20T13:37:20,732][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch version determined (7.13.4) {:es_version=>7}
logstash_1       | [2021-09-20T13:37:20,734][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
root@ip-192-168-45-101:~/docker-elk/logstash/pipeline# more logstash.conf 
input {
  beats {
    port => 5044
    ssl => false
    client_inactivity_timeout => 300
    codec => plain
  }
}

############ FILTERS ###############

#filter {
#    dissect { mapping => { "message" => "<%{level}>%{} %{ts} %{loglevel} %{something} - [%{kvString}" } }
#    date { match => [ "ts", "ISO8601" ] }
#    kv { source => "kvString" field_split => ";" value_split => ":" trim_key => " " }
#}

## Add your filters / logstash plugins configuration here

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    user => "elastic"
    password => "changeme"
    ecs_compatibility => disabled
    ssl => true
    cacert => "config/elasticsearch-ca.pem"
    ssl_certificate_verification => false
  }
}

ewitkop-panw commented 2 years ago

It is working now! I needed to make a few changes to my logstash.conf file. Here are the final results.

input {
  beats {
    port => 5044
    ssl => false
    client_inactivity_timeout => 300
    # codec => plain
  }
}

############ FILTERS ###############

#filter {
#    dissect { mapping => { "message" => "<%{level}>%{} %{ts} %{loglevel} %{something} - [%{kvString}" } }
#    date { match => [ "ts", "ISO8601" ] }
#    kv { source => "kvString" field_split => ";" value_split => ":" trim_key => " " }
#}

## Add your filters / logstash plugins configuration here

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    user => "elastic"
    password => "changeme"
    ecs_compatibility => disabled
    ssl => true
    cacert => "config/elasticsearch-ca.pem"
    ssl_certificate_verification => false
    data_stream => "true"
  }

  stdout {
    codec => rubydebug
  }
}
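
Note that with data_stream => "true", the elasticsearch output writes to a data stream (logs-generic-default by default) instead of daily logstash-* indices, so that is where the documents will appear. The data stream can be listed with the data stream API, assuming this stack's default credentials:

curl -k -u elastic:changeme 'https://localhost:9200/_data_stream?pretty'
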
antoineco commented 2 years ago

@ewitkop-panw sweet! So you didn't need to configure the codec at all, right?

If this issue is solved, feel free to close it.

ewitkop-panw commented 2 years ago

Filebeat is my syslog listener. Then I send the logs over to Logstash.