StamusNetworks / SELKS

A Suricata based IDS/IPS/NSM distro
https://www.stamus-networks.com/open-source/#selks
GNU General Public License v3.0

SELKS6 console update - Kibana dashboard error #251

Open michal25 opened 4 years ago

michal25 commented 4 years ago

The script selks-upgrade_stamus completed OK, but after running the script selks-first-time-setup_stamus I received this error:
Traceback (most recent call last):
  File "bin/manage.py", line 10, in <module>
    execute_from_command_line(sys.argv)
  File "/usr/share/python/scirius/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 364, in execute_from_command_line
    utility.execute()
  File "/usr/share/python/scirius/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 356, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/usr/share/python/scirius/local/lib/python2.7/site-packages/django/core/management/base.py", line 283, in run_from_argv
    self.execute(*args, **cmd_options)
  File "/usr/share/python/scirius/local/lib/python2.7/site-packages/django/core/management/base.py", line 330, in execute
    output = self.handle(*args, **options)
  File "/usr/share/python/scirius/local/lib/python2.7/site-packages/rules/management/commands/kibana_reset.py", line 38, in handle
    self.kibana_reset()
  File "/usr/share/python/scirius/local/lib/python2.7/site-packages/rules/es_data.py", line 1990, in kibana_reset
    self._kibana_request('/api/spaces/space', KIBANA6_NAMESPACE)
  File "/usr/share/python/scirius/local/lib/python2.7/site-packages/rules/es_data.py", line 1739, in _kibana_request
    urllib2.urlopen(req)
  File "/usr/lib/python2.7/urllib2.py", line 154, in urlopen
    return opener.open(url, data, timeout)
  File "/usr/lib/python2.7/urllib2.py", line 435, in open
    response = meth(req, response)
  File "/usr/lib/python2.7/urllib2.py", line 548, in http_response
    'http', request, response, code, msg, hdrs)
  File "/usr/lib/python2.7/urllib2.py", line 473, in error
    return self._call_chain(*args)
  File "/usr/lib/python2.7/urllib2.py", line 407, in _call_chain
    result = func(*args)
  File "/usr/lib/python2.7/urllib2.py", line 556, in http_error_default
    raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 503: Service Unavailable
Dashboards loading set up job failed...Exiting...
### Exited with ERROR  ### 

And when run manually:
# cd /usr/share/python/scirius/ && . bin/activate && python bin/manage.py kibana_reset && deactivate
Traceback (most recent call last):
  File "bin/manage.py", line 10, in <module>
    execute_from_command_line(sys.argv)
  File "/usr/share/python/scirius/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 364, in execute_from_command_line
    utility.execute()
  File "/usr/share/python/scirius/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 356, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/usr/share/python/scirius/local/lib/python2.7/site-packages/django/core/management/base.py", line 283, in run_from_argv
    self.execute(*args, **cmd_options)
  File "/usr/share/python/scirius/local/lib/python2.7/site-packages/django/core/management/base.py", line 330, in execute
    output = self.handle(*args, **options)
  File "/usr/share/python/scirius/local/lib/python2.7/site-packages/rules/management/commands/kibana_reset.py", line 38, in handle
    self.kibana_reset()
  File "/usr/share/python/scirius/local/lib/python2.7/site-packages/rules/es_data.py", line 1990, in kibana_reset
    self._kibana_request('/api/spaces/space', KIBANA6_NAMESPACE)
  File "/usr/share/python/scirius/local/lib/python2.7/site-packages/rules/es_data.py", line 1739, in _kibana_request
    urllib2.urlopen(req)
  File "/usr/lib/python2.7/urllib2.py", line 154, in urlopen
    return opener.open(url, data, timeout)
  File "/usr/lib/python2.7/urllib2.py", line 435, in open
    response = meth(req, response)
  File "/usr/lib/python2.7/urllib2.py", line 548, in http_response
    'http', request, response, code, msg, hdrs)
  File "/usr/lib/python2.7/urllib2.py", line 473, in error
    return self._call_chain(*args)
  File "/usr/lib/python2.7/urllib2.py", line 407, in _call_chain
    result = func(*args)
  File "/usr/lib/python2.7/urllib2.py", line 556, in http_error_default
    raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 503: Service Unavailable
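
For reference, the endpoint the reset script is calling can be probed directly; a 503 there too would confirm that Kibana itself is refusing requests rather than Scirius doing something wrong (this assumes Kibana is listening on its default port 5601):

# curl -i http://localhost:5601/api/spaces/space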

Resetting the SN dashboards from the GUI doesn't work either.

# selks-health-check_stamus 
● suricata.service - LSB: Next Generation IDS/IPS
   Loaded: loaded (/etc/init.d/suricata; generated)
   Active: active (running) since Wed 2020-08-26 13:01:34 CEST; 7min ago
     Docs: man:systemd-sysv-generator(8)
    Tasks: 10 (limit: 4915)
   Memory: 330.9M
   CGroup: /system.slice/suricata.service
           └─26399 /usr/bin/suricata -c /etc/suricata/suricata.yaml --pidfile /var/run/suricata.pid --af-packet -D -v --user=logstash

Aug 26 13:01:34 SELKS2 systemd[1]: Starting LSB: Next Generation IDS/IPS...
Aug 26 13:01:34 SELKS2 suricata[26385]: Starting suricata in IDS (af-packet) mode... done.
Aug 26 13:01:34 SELKS2 systemd[1]: Started LSB: Next Generation IDS/IPS.
● elasticsearch.service - Elasticsearch
   Loaded: loaded (/lib/systemd/system/elasticsearch.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2020-08-26 12:38:55 CEST; 29min ago
     Docs: https://www.elastic.co
 Main PID: 23840 (java)
    Tasks: 99 (limit: 4915)
   Memory: 4.6G
   CGroup: /system.slice/elasticsearch.service
           ├─23840 /usr/share/elasticsearch/jdk/bin/java -Xshare:auto -Des.networkaddress.cache.ttl=60 -Des.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreTouch…
           └─24022 /usr/share/elasticsearch/modules/x-pack-ml/platform/linux-x86_64/bin/controller

Aug 26 12:38:44 SELKS2 systemd[1]: Starting Elasticsearch...
Aug 26 12:38:55 SELKS2 systemd[1]: Started Elasticsearch.
● logstash.service - logstash
   Loaded: loaded (/etc/systemd/system/logstash.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2020-08-26 12:45:26 CEST; 23min ago
 Main PID: 24797 (java)
    Tasks: 43 (limit: 4915)
   Memory: 901.3M
   CGroup: /system.slice/logstash.service
           └─24797 /bin/java -Xms4g -Xmx4g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djava.awt.headless=true…

Aug 26 12:45:43 SELKS2 logstash[24797]: [2020-08-26T12:45:43,904][INFO ][logstash.outputs.elasticsearch][main] Attempting to install template {:manage_template=>{"t…
Aug 26 12:45:43 SELKS2 logstash[24797]: [2020-08-26T12:45:43,919][INFO ][logstash.outputs.elasticsearch][main] Installing elasticsearch template to _tem…ate/logstash
Aug 26 12:45:44 SELKS2 logstash[24797]: [2020-08-26T12:45:44,006][INFO ][logstash.filters.geoip   ][main] Using geoip database {:path=>"/usr/share/logst…-City.mmdb"}
Aug 26 12:45:44 SELKS2 logstash[24797]: [2020-08-26T12:45:44,152][INFO ][logstash.filters.geoip   ][main] Using geoip database {:path=>"/usr/share/logst…-City.mmdb"}
Aug 26 12:45:44 SELKS2 logstash[24797]: [2020-08-26T12:45:44,212][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers…
Aug 26 12:45:45 SELKS2 logstash[24797]: [2020-08-26T12:45:45,245][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"…onds"=>1.03}
Aug 26 12:45:45 SELKS2 logstash[24797]: [2020-08-26T12:45:45,460][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
Aug 26 12:45:45 SELKS2 logstash[24797]: [2020-08-26T12:45:45,525][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:…pelines=>[]}
Aug 26 12:45:45 SELKS2 logstash[24797]: [2020-08-26T12:45:45,559][INFO ][filewatch.observingtail  ][main][d4aef1d642dafd3cc0ec28e9e79530daa4bc5c58ba6b72… collections
Aug 26 12:45:45 SELKS2 logstash[24797]: [2020-08-26T12:45:45,922][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
Hint: Some lines were ellipsized, use -l to show in full.
● kibana.service - Kibana
   Loaded: loaded (/etc/systemd/system/kibana.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2020-08-26 12:38:55 CEST; 29min ago
 Main PID: 24070 (node)
    Tasks: 11 (limit: 4915)
   Memory: 737.7M
   CGroup: /system.slice/kibana.service
           └─24070 /usr/share/kibana/bin/../node/bin/node /usr/share/kibana/bin/../src/cli

Aug 26 12:50:39 SELKS2 kibana[24070]: {"type":"log","@timestamp":"2020-08-26T10:50:39Z","tags":["info","savedobjects-service"],"pid":24070,"message":"D…amespaces\""}
Aug 26 12:53:24 SELKS2 kibana[24070]: {"type":"log","@timestamp":"2020-08-26T10:53:24Z","tags":["info","savedobjects-service"],"pid":24070,"message":"D…amespaces\""}
Aug 26 12:53:26 SELKS2 kibana[24070]: {"type":"log","@timestamp":"2020-08-26T10:53:26Z","tags":["info","savedobjects-service"],"pid":24070,"message":"D…amespaces\""}
Aug 26 12:59:11 SELKS2 kibana[24070]: {"type":"log","@timestamp":"2020-08-26T10:59:11Z","tags":["info","savedobjects-service"],"pid":24070,"message":"D…amespaces\""}
Aug 26 12:59:12 SELKS2 kibana[24070]: {"type":"log","@timestamp":"2020-08-26T10:59:12Z","tags":["info","savedobjects-service"],"pid":24070,"message":"D…amespaces\""}
Aug 26 12:59:59 SELKS2 kibana[24070]: {"type":"log","@timestamp":"2020-08-26T10:59:59Z","tags":["info","savedobjects-service"],"pid":24070,"message":"D…amespaces\""}
Aug 26 13:00:01 SELKS2 kibana[24070]: {"type":"log","@timestamp":"2020-08-26T11:00:01Z","tags":["info","savedobjects-service"],"pid":24070,"message":"D…amespaces\""}
Aug 26 13:02:30 SELKS2 kibana[24070]: {"type":"log","@timestamp":"2020-08-26T11:02:30Z","tags":["info","savedobjects-service"],"pid":24070,"message":"D…amespaces\""}
Aug 26 13:06:15 SELKS2 kibana[24070]: {"type":"log","@timestamp":"2020-08-26T11:06:15Z","tags":["info","savedobjects-service"],"pid":24070,"message":"D…amespaces\""}
Aug 26 13:06:17 SELKS2 kibana[24070]: {"type":"log","@timestamp":"2020-08-26T11:06:17Z","tags":["info","savedobjects-service"],"pid":24070,"message":"D…amespaces\""}
Hint: Some lines were ellipsized, use -l to show in full.
● evebox.service - EveBox Server
   Loaded: loaded (/lib/systemd/system/evebox.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2020-08-25 15:16:20 CEST; 21h ago
 Main PID: 599 (evebox)
    Tasks: 8 (limit: 4915)
   Memory: 23.7M
   CGroup: /system.slice/evebox.service
           └─599 /usr/bin/evebox server

Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.
● molochviewer-selks.service - Moloch Viewer
   Loaded: loaded (/etc/systemd/system/molochviewer-selks.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2020-08-26 12:45:26 CEST; 23min ago
 Main PID: 24802 (sh)
    Tasks: 12 (limit: 4915)
   Memory: 43.7M
   CGroup: /system.slice/molochviewer-selks.service
           ├─24802 /bin/sh -c /data/moloch/bin/node viewer.js -c /data/moloch/etc/config.ini >> /data/moloch/logs/viewer.log 2>&1
           └─24815 /data/moloch/bin/node viewer.js -c /data/moloch/etc/config.ini

Aug 26 12:45:26 SELKS2 systemd[1]: Started Moloch Viewer.
Aug 26 12:49:11 SELKS2 systemd[1]: molochviewer-selks.service: Current command vanished from the unit file, execution of the command list won't be resumed.
● molochpcapread-selks.service - Moloch Pcap Read
   Loaded: loaded (/etc/systemd/system/molochpcapread-selks.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2020-08-26 12:45:26 CEST; 23min ago
 Main PID: 24811 (sh)
    Tasks: 6 (limit: 4915)
   Memory: 196.5M
   CGroup: /system.slice/molochpcapread-selks.service
           ├─24811 /bin/sh -c /data/moloch/bin/moloch-capture -c /data/moloch/etc/config.ini -m --copy --delete -R /data/nsm/  >> /data/moloch/logs/capture.log 2>&1
           └─24812 /data/moloch/bin/moloch-capture -c /data/moloch/etc/config.ini -m --copy --delete -R /data/nsm/

Aug 26 12:45:26 SELKS2 systemd[1]: Started Moloch Pcap Read.
Aug 26 12:49:11 SELKS2 systemd[1]: molochpcapread-selks.service: Current command vanished from the unit file, execution of the command list won't be resumed.
scirius                          RUNNING   pid 24089, uptime 0:29:50
ii  elasticsearch                   7.9.0                        amd64        Distributed RESTful search engine built for the cloud
ii  elasticsearch-curator           5.8.1                        amd64        Have indices in Elasticsearch? This is the tool for you!\n\nLike a museum curator manages the exhibits and collections on display, \nElasticsearch Curator helps you curate, or manage your indices.
ii  evebox                          1:0.11.1                     amd64        no description given
ii  kibana                          7.9.0                        amd64        Explore and visualize your Elasticsearch data
ii  kibana-dashboards-stamus        2020042401                   amd64        Kibana 6 dashboard templates.
ii  logstash                        1:7.9.0-1                    all          An extensible logging pipeline
ii  moloch                          2.4.0-1                      amd64        Moloch Full Packet System
ii  scirius                         3.5.0-3                      amd64        Django application to manage Suricata ruleset
ii  suricata                        1:2020082602-0stamus0        amd64        Suricata open source multi-thread IDS/IPS/NSM system.
Filesystem     Type      Size  Used Avail Use% Mounted on
udev           devtmpfs  7.8G     0  7.8G   0% /dev
tmpfs          tmpfs     1.6G  157M  1.5G  10% /run
/dev/md0       ext4      887G  585G  257G  70% /
tmpfs          tmpfs     7.8G     0  7.8G   0% /dev/shm
tmpfs          tmpfs     5.0M     0  5.0M   0% /run/lock
tmpfs          tmpfs     7.8G     0  7.8G   0% /sys/fs/cgroup
tmpfs          tmpfs     1.6G     0  1.6G   0% /run/user/1001
pevma commented 4 years ago

Can you open the Kibana page? It seems to be unavailable.

michal25 commented 4 years ago

The Kibana page shows a white screen: no pictures, no text, no error message.

michal25 commented 4 years ago

Ah, there is an error message:

Kibana server is not ready yet
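
That message usually means Kibana cannot finish initializing against Elasticsearch. A quick way to check both sides is the cluster health API plus the Kibana journal (a sketch, assuming the default local ports):

# curl -s 'http://localhost:9200/_cluster/health?pretty'
# journalctl -u kibana.service -n 50 --no-pager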

pevma commented 4 years ago

Did you make sure the nginx config is up to date - https://github.com/StamusNetworks/SELKS/wiki/Kibana-did-not-load-properly?
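
After syncing the config, it is worth validating the syntax and reloading before retesting (standard nginx commands):

# nginx -t && systemctl reload nginx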

michal25 commented 4 years ago

Yes, I have exactly this configuration file for nginx.


pevma commented 4 years ago

Including the last two entries?


michal25 commented 4 years ago

Yes, an exact copy of the configuration file.


michal25 commented 4 years ago

I tried to reset the SN dashboards, but this step terminates with an error.

The Kibana server is not available.

The nginx configuration file is the one from https://github.com/StamusNetworks/SELKS/wiki/Kibana-did-not-load-properly (screenshots: Screenshot_20200916_114448, Screenshot_20200916_114536).

pevma commented 4 years ago

What is the output of selks-health-check_stamus?

michal25 commented 4 years ago

root@SELKS2:~# selks-health-check_stamus
● suricata.service - LSB: Next Generation IDS/IPS
   Loaded: loaded (/etc/init.d/suricata; generated)
   Active: active (running) since Wed 2020-09-16 11:32:17 CEST; 59min ago
     Docs: man:systemd-sysv-generator(8)
    Tasks: 10 (limit: 4915)
   Memory: 390.7M
   CGroup: /system.slice/suricata.service
           └─23105 /usr/bin/suricata -c /etc/suricata/suricata.yaml --pidfile /var/run/suricata.pid --af-packet -D -v --user=logstash

Sep 16 11:32:17 SELKS2 systemd[1]: Starting LSB: Next Generation IDS/IPS...
Sep 16 11:32:17 SELKS2 suricata[23092]: Starting suricata in IDS (af-packet) mode... done.
Sep 16 11:32:17 SELKS2 systemd[1]: Started LSB: Next Generation IDS/IPS.
● elasticsearch.service - Elasticsearch
   Loaded: loaded (/lib/systemd/system/elasticsearch.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2020-09-16 11:14:15 CEST; 1h 17min ago
     Docs: https://www.elastic.co
 Main PID: 21914 (java)
    Tasks: 90 (limit: 4915)
   Memory: 4.7G
   CGroup: /system.slice/elasticsearch.service
           ├─21914 /usr/share/elasticsearch/jdk/bin/java -Xshare:auto -Des.networkaddress.cache.ttl=60 -Des.networkaddress.cache.negative.ttl=10 …
           └─22098 /usr/share/elasticsearch/modules/x-pack-ml/platform/linux-x86_64/bin/controller

Sep 16 11:14:05 SELKS2 systemd[1]: Starting Elasticsearch...
Sep 16 11:14:15 SELKS2 systemd[1]: Started Elasticsearch.
● logstash.service - logstash
   Loaded: loaded (/etc/systemd/system/logstash.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2020-08-26 12:45:26 CEST; 2 weeks 6 days ago
 Main PID: 24797 (java)
    Tasks: 42 (limit: 4915)
   Memory: 763.9M
   CGroup: /system.slice/logstash.service
           └─24797 /bin/java -Xms4g -Xmx4g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djav…

Sep 16 00:28:24 SELKS2 logstash[24797]: at org.jruby.RubyProc.call(RubyProc.java:318)
Sep 16 00:28:24 SELKS2 logstash[24797]: at org.jruby.internal.runtime.RubyRunnable.run(RubyRunnable.java:105)
Sep 16 00:28:24 SELKS2 logstash[24797]: at java.base/java.lang.Thread.run(Thread.java:834)
Sep 16 00:28:24 SELKS2 logstash[24797]: Caused by: java.io.IOException: No space left on device
Sep 16 00:28:24 SELKS2 logstash[24797]: at java.base/java.io.FileOutputStream.writeBytes(Native Method)
Sep 16 00:28:24 SELKS2 logstash[24797]: at java.base/java.io.FileOutputStream.write(FileOutputStream.java:354)
Sep 16 00:28:24 SELKS2 logstash[24797]: at org.apache.logging.log4j.core.appender.OutputStreamManager.writeToDestination(Output….java:250)
Sep 16 00:28:24 SELKS2 logstash[24797]: ... 83 more
Sep 16 00:28:24 SELKS2 logstash[24797]: [2020-09-16T00:28:24,283][INFO ][logstash.outputs.elasticsearch][main][2ada9d36290a6a5138e7215602be65b629…
Sep 16 00:28:24 SELKS2 logstash[24797]: [2020-09-16T00:28:24,286][INFO ][logstash.outputs.elasticsearch][main][2ada9d36290a6a5138e72156…:count=>2}
Hint: Some lines were ellipsized, use -l to show in full.
● kibana.service - Kibana
   Loaded: loaded (/etc/systemd/system/kibana.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2020-09-16 11:14:15 CEST; 1h 17min ago
 Main PID: 22143 (node)
    Tasks: 11 (limit: 4915)
   Memory: 940.2M
   CGroup: /system.slice/kibana.service
           └─22143 /usr/share/kibana/bin/../node/bin/node /usr/share/kibana/bin/../src/cli

Sep 16 11:14:51 SELKS2 kibana[22143]: {"type":"log","@timestamp":"2020-09-16T09:14:51Z","tags":["info","savedobjects-service"],"pid":2…igrations"}
Sep 16 11:14:51 SELKS2 kibana[22143]: {"type":"log","@timestamp":"2020-09-16T09:14:51Z","tags":["warning","plugins","reporting","config"],"pid":2…
Sep 16 11:14:51 SELKS2 kibana[22143]: {"type":"log","@timestamp":"2020-09-16T09:14:51Z","tags":["info","savedobjects-service"],"pid":2…kibana_2."}
Sep 16 11:14:51 SELKS2 kibana[22143]: {"type":"log","@timestamp":"2020-09-16T09:14:51Z","tags":["warning","savedobjects-service"],"pid":22143,"me…
Sep 16 11:14:51 SELKS2 kibana[22143]: {"type":"log","@timestamp":"2020-09-16T09:14:51Z","tags":["warning","savedobjects-service"],"pid":22143,"me…
Sep 16 11:17:38 SELKS2 kibana[22143]: {"type":"log","@timestamp":"2020-09-16T09:17:38Z","tags":["info","savedobjects-service"],"pid":2…espaces\""}
Sep 16 11:33:02 SELKS2 kibana[22143]: {"type":"log","@timestamp":"2020-09-16T09:33:02Z","tags":["info","savedobjects-service"],"pid":2…espaces\""}
Sep 16 11:33:03 SELKS2 kibana[22143]: {"type":"log","@timestamp":"2020-09-16T09:33:03Z","tags":["info","savedobjects-service"],"pid":2…espaces\""}
Sep 16 11:42:22 SELKS2 kibana[22143]: {"type":"log","@timestamp":"2020-09-16T09:42:22Z","tags":["info","savedobjects-service"],"pid":2…espaces\""}
Sep 16 11:42:23 SELKS2 kibana[22143]: {"type":"log","@timestamp":"2020-09-16T09:42:23Z","tags":["info","savedobjects-service"],"pid":2…espaces\""}
Hint: Some lines were ellipsized, use -l to show in full.
● evebox.service - EveBox Server
   Loaded: loaded (/lib/systemd/system/evebox.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2020-08-25 15:16:20 CEST; 3 weeks 0 days ago
 Main PID: 599 (evebox)
    Tasks: 8 (limit: 4915)
   Memory: 21.6M
   CGroup: /system.slice/evebox.service
           └─599 /usr/bin/evebox server

Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.
● molochviewer-selks.service - Moloch Viewer
   Loaded: loaded (/etc/systemd/system/molochviewer-selks.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2020-09-16 11:14:18 CEST; 1h 17min ago
 Main PID: 22213 (sh)
    Tasks: 12 (limit: 4915)
   Memory: 79.6M
   CGroup: /system.slice/molochviewer-selks.service
           ├─22213 /bin/sh -c /data/moloch/bin/node viewer.js -c /data/moloch/etc/config.ini >> /data/moloch/logs/viewer.log 2>&1
           └─22220 /data/moloch/bin/node viewer.js -c /data/moloch/etc/config.ini

Sep 16 11:14:18 SELKS2 systemd[1]: Started Moloch Viewer.
Sep 16 11:17:20 SELKS2 systemd[1]: molochviewer-selks.service: Current command vanished from the unit file, execution of the command l…be resumed.
Hint: Some lines were ellipsized, use -l to show in full.
● molochpcapread-selks.service - Moloch Pcap Read
   Loaded: loaded (/etc/systemd/system/molochpcapread-selks.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2020-09-16 11:14:18 CEST; 1h 17min ago
 Main PID: 22209 (sh)
    Tasks: 6 (limit: 4915)
   Memory: 185.6M
   CGroup: /system.slice/molochpcapread-selks.service
           ├─22209 /bin/sh -c /data/moloch/bin/moloch-capture -c /data/moloch/etc/config.ini -m --copy --delete -R /data/nsm/ >> /data/moloch/lo…
           └─22212 /data/moloch/bin/moloch-capture -c /data/moloch/etc/config.ini -m --copy --delete -R /data/nsm/

Sep 16 11:14:18 SELKS2 systemd[1]: Started Moloch Pcap Read.
Sep 16 11:17:20 SELKS2 systemd[1]: molochpcapread-selks.service: Current command vanished from the unit file, execution of the command…be resumed.
Hint: Some lines were ellipsized, use -l to show in full.
scirius                          RUNNING   pid 22164, uptime 1:17:53
ii  elasticsearch                   7.9.1                        amd64        Distributed RESTful search engine built for the cloud
ii  elasticsearch-curator           5.8.1                        amd64        Have indices in Elasticsearch? This is the tool for you!\n\nLike a museum curator manages the exhibits and collections on display, \nElasticsearch Curator helps you curate, or manage your indices.
ii  evebox                          1:0.11.1                     amd64        no description given
ii  kibana                          7.9.1                        amd64        Explore and visualize your Elasticsearch data
ii  kibana-dashboards-stamus        2020042401                   amd64        Kibana 6 dashboard templates.
ii  logstash                        1:7.9.1-1                    all          An extensible logging pipeline
ii  moloch                          2.4.0-1                      amd64        Moloch Full Packet System
ii  scirius                         3.5.0-3                      amd64        Django application to manage Suricata ruleset
ii  suricata                        1:2020091301-0stamus0        amd64        Suricata open source multi-thread IDS/IPS/NSM system.
Filesystem     Type      Size  Used Avail Use% Mounted on
udev           devtmpfs  7.8G     0  7.8G   0% /dev
tmpfs          tmpfs     1.6G  157M  1.5G  10% /run
/dev/md0       ext4      887G  519G  324G  62% /
tmpfs          tmpfs     7.8G     0  7.8G   0% /dev/shm
tmpfs          tmpfs     5.0M     0  5.0M   0% /run/lock
tmpfs          tmpfs     7.8G     0  7.8G   0% /sys/fs/cgroup
tmpfs          tmpfs     1.6G     0  1.6G   0% /run/user/1001

pevma commented 4 years ago

Sep 16 00:28:24 SELKS2 logstash[24797]: Caused by: java.io.IOException: No space left on device

Seems this could be the reason. Then you need to clean up old logs from ES, restart ES, etc.
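
For example, something along these lines (a sketch only; the index pattern and date are assumptions, so first check what the _cat API actually lists):

# curl -s 'http://localhost:9200/_cat/indices?v'
# curl -s -XDELETE 'http://localhost:9200/logstash-*-2020.08.*'
# systemctl restart elasticsearch kibana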

michal25 commented 4 years ago

I think the problem is in the Kibana dashboards configuration. Look:

START of first time setup script - Wed 16 Sep 2020 11:31:43 AM CEST

Setting up sniffing interface

Please supply a network interface(s) to set up SELKS Suricata IDPS threat detection on
0: enp0s31f6
1: enp1s0
2: lo
Please type in interface or space delimited interfaces below and hit "Enter". Example: eth1 OR Example: eth1 eth2 eth3

Configure threat detection for INTERFACE(S):

The supplied network interface(s): enp0s31f6

DONE!

FPC - Full Packet Capture. Suricata will rotate and delete the pcap captured files.
FPC_Retain - Full Packet Capture with Moloch's pcap retention/rotation. Keeps the pcaps as long as there is space available.
None - disable packet capture

1) FPC
2) FPC_Retain
3) NONE
Please choose an option. Type in a number and hit "Enter"
Enable Full Packet Capture with pcap retaining

Starting Moloch DB set up

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   404  100   404    0     0   394k      0 --:--:-- --:--:-- --:--:--  394k
{"cluster_name":"elasticsearch","status":"yellow","timed_out":false,"number_of_nodes":1,"number_of_data_nodes":1,"active_primary_shards":91,"active_shards":91,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":2,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":97.84946236559139}

Setting up Moloch

WARNING elasticsearch health is 'yellow' instead of 'green', things may be broken

It is STRONGLY recommended that you stop ALL moloch captures and viewers before proceeding. Use 'db.pl http://localhost:9200 backup' to backup db first.

There is 1 elastic search data node, if you expect more please fix first before proceeding.

It appears this elastic search cluster already has moloch installed (version 64), this will delete ALL data in elastic search! (It does not delete the pcap files on disk.)

Type "INIT" to continue - do you want to erase everything?? Erasing Creating

Finished
Found interfaces: enp0s31f6;enp1s0;lo
Semicolon ';' separated list of interfaces to monitor [eth1]
Install Elasticsearch server locally for demo, must have at least 3G of memory, NOT recommended for production use (yes or no) [no]
Elasticsearch server URL [http://localhost:9200]
Password to encrypt S2S and other things [no-default]
Moloch - Creating configuration files
Not overwriting /data/moloch/etc/config.ini, delete and run again if update required (usually not), or edit by hand
Installing systemd start files, use systemctl
Download GEO files? (yes or no) [yes]
Moloch - Downloading GEO files
2020-09-16 11:32:27 URL:https://raw.githubusercontent.com/wireshark/wireshark/master/manuf [1750878/1750878] -> "oui.txt" [1]

Moloch - Configured - Now continue with step 4 in /data/moloch/README.txt

  /sbin/start elasticsearch # for upstart/Centos 6/Ubuntu 14.04
  systemctl start elasticsearch.service # for systemd/Centos 7/Ubuntu 16.04

5) Initialize/Upgrade Elasticsearch Moloch configuration
  a) If this is the first install, or want to delete all data
     /data/moloch/db/db.pl http://ESHOST:9200 init
  b) If this is an update to moloch package
     /data/moloch/db/db.pl http://ESHOST:9200 upgrade
6) Add an admin user if a new install or after an init
     /data/moloch/bin/moloch_add_user.sh admin "Admin User" THEPASSWORD --admin
7) Start everything
  a) If using upstart (Centos 6 or sometimes Ubuntu 14.04):
     /sbin/start molochcapture
     /sbin/start molochviewer
  b) If using systemd (Centos 7 or Ubuntu 16.04 or sometimes Ubuntu 14.04)
     systemctl start molochcapture.service
     systemctl start molochviewer.service
8) Look at log files for errors
     /data/moloch/logs/viewer.log
     /data/moloch/logs/capture.log
9) Visit http://MOLOCHHOST:8005 with your favorite browser.
     user: admin
     password: THEPASSWORD from step #6

If you want IP -> Geo/ASN to work, you need to setup a maxmind account and the geoipupdate program. See https://molo.ch/faq#maxmind

Any configuration changes can be made to /data/moloch/etc/config.ini See https://molo.ch/faq#moloch-is-not-working for issues

Additional information can be found at:

Setting up Moloch configs and services

Would you like to setup a retention policy now? (y/n)

Please specify the maximum file size in Gigabytes. The disk should have room for at least 10 times the specified value. (default is 12)

Setting maxFileSizeG to 15 Gigabyte.

Please specify the maximum rotation time in minutes. (default is none)

Setting maxFileTimeM to 600 minutes.

Setting up and restarting services

Setting up Scirius/Moloch proxy user

Added
Traceback (most recent call last):
  File "bin/manage.py", line 10, in <module>
    execute_from_command_line(sys.argv)
  File "/usr/share/python/scirius/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 364, in execute_from_command_line
    utility.execute()
  File "/usr/share/python/scirius/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 356, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/usr/share/python/scirius/local/lib/python2.7/site-packages/django/core/management/base.py", line 283, in run_from_argv
    self.execute(*args, **cmd_options)
  File "/usr/share/python/scirius/local/lib/python2.7/site-packages/django/core/management/base.py", line 330, in execute
    output = self.handle(*args, **options)
  File "/usr/share/python/scirius/local/lib/python2.7/site-packages/rules/management/commands/kibana_reset.py", line 38, in handle
    self.kibana_reset()
  File "/usr/share/python/scirius/local/lib/python2.7/site-packages/rules/es_data.py", line 1990, in kibana_reset
    self._kibana_request('/api/spaces/space', KIBANA6_NAMESPACE)
  File "/usr/share/python/scirius/local/lib/python2.7/site-packages/rules/es_data.py", line 1739, in _kibana_request
    urllib2.urlopen(req)
  File "/usr/lib/python2.7/urllib2.py", line 154, in urlopen
    return opener.open(url, data, timeout)
  File "/usr/lib/python2.7/urllib2.py", line 435, in open
    response = meth(req, response)
  File "/usr/lib/python2.7/urllib2.py", line 548, in http_response
    'http', request, response, code, msg, hdrs)
  File "/usr/lib/python2.7/urllib2.py", line 473, in error
    return self._call_chain(*args)
  File "/usr/lib/python2.7/urllib2.py", line 407, in _call_chain
    result = func(*args)
  File "/usr/lib/python2.7/urllib2.py", line 556, in http_error_default
    raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 503: Service Unavailable
Dashboards loading set up job failed...Exiting...

Exited with ERROR

FINISH of first time setup script - Wed 16 Sep 2020 11:33:22 AM CEST

Exited with FAILED
Full log located at - /opt/selks/log/selks-first-time-setup_stamus.log
Press enter to continue

pevma commented 4 years ago

Is it possible to try to solve the full-disk issue first, to rule it out?

michal25 commented 4 years ago

The problem is that the disk is not full.

/dev/md0 ext4 887G 519G 324G 62% /

The disk has 324 GB free of 887 GB.

So the logstash error about the disk being full looks like a false positive.

michal25 commented 4 years ago

I tried to reboot the SELKS device, but now it is totally dead (grub_calloc not found error). So first I will repair the grub error and restart the device.
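
For the record, the usual fix for the grub_calloc boot failure is to reinstall GRUB from a rescue/live system, roughly like this (the device names are only an example for a RAID1 root like this one):

# mount /dev/md0 /mnt
# grub-install --boot-directory=/mnt/boot /dev/sda
# grub-install --boot-directory=/mnt/boot /dev/sdb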

michal25 commented 4 years ago

I repaired grub and rebooted the SELKS device.

Still the same error.

The hard disk has enough space (835 GB free).

scirius                          RUNNING   pid 857, uptime 0:18:48
ii  elasticsearch                   7.9.1                        amd64        Distributed RESTful search engine built for the cloud
ii  elasticsearch-curator           5.8.1                        amd64        Have indices in Elasticsearch? This is the tool for you!\n\nLike a museum curator manages the exhibits and collections on display, \nElasticsearch Curator helps you curate, or manage your indices.
ii  evebox                          1:0.11.1                     amd64        no description given
ii  kibana                          7.9.1                        amd64        Explore and visualize your Elasticsearch data
ii  kibana-dashboards-stamus        2020042401                   amd64        Kibana 6 dashboard templates.
ii  logstash                        1:7.9.1-1                    all          An extensible logging pipeline
ii  moloch                          2.4.0-1                      amd64        Moloch Full Packet System
ii  scirius                         3.5.0-3                      amd64        Django application to manage Suricata ruleset
ii  suricata                        1:2020091301-0stamus0        amd64        Suricata open source multi-thread IDS/IPS/NSM system.
Filesystem     Type      Size  Used Avail Use% Mounted on
udev           devtmpfs  7.8G     0  7.8G   0% /dev
tmpfs          tmpfs     1.6G  8.9M  1.6G   1% /run
/dev/md0       ext4      887G  6.9G  835G   1% /
tmpfs          tmpfs     7.8G     0  7.8G   0% /dev/shm
tmpfs          tmpfs     5.0M     0  5.0M   0% /run/lock
tmpfs          tmpfs     7.8G     0  7.8G   0% /sys/fs/cgroup
tmpfs          tmpfs     1.6G     0  1.6G   0% /run/user/1001

Any ideas?

pevma commented 4 years ago

Can you please share the last 100 lines from the elasticsearch and logstash logs?
I am trying to figure out whether this is the issue described here - https://stackoverflow.com/questions/34911181/how-to-undo-setting-elasticsearch-index-to-readonly
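
If it is that issue, the read-only block Elasticsearch applies when the flood-stage disk watermark is hit stays in place even after space is freed; it can be checked and cleared with the settings API:

# curl -s 'http://localhost:9200/_all/_settings?pretty' | grep -i read_only
# curl -s -XPUT 'http://localhost:9200/_all/_settings' -H 'Content-Type: application/json' -d '{"index.blocks.read_only_allow_delete": null}'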

michal25 commented 4 years ago

Here is the elasticsearch log:

root@SELKS2:/var/log/elasticsearch# tail -100 elasticsearch.log
[2020-09-27T02:00:01,333][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-dns-2020.09.27/15rXpPC4Q42ArXPxkhPVcw] update_mapping [_doc]
[2020-09-27T02:00:01,334][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-dns-2020.09.27/15rXpPC4Q42ArXPxkhPVcw] update_mapping [_doc]
[2020-09-27T02:00:01,404][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-flow-2020.09.27/yAy9Z-_IQhaAbhMCq9BU5w] update_mapping [_doc]
[2020-09-27T02:00:01,481][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-flow-2020.09.27/yAy9Z-_IQhaAbhMCq9BU5w] update_mapping [_doc]
[2020-09-27T02:00:01,766][INFO ][o.e.c.m.MetadataCreateIndexService] [SELKS2] [logstash-fileinfo-2020.09.27] creating index, cause [auto(bulk api)], templates [logstash], shards [1]/[0]
[2020-09-27T02:00:01,846][INFO ][o.e.c.m.MetadataCreateIndexService] [SELKS2] [logstash-tls-2020.09.27] creating index, cause [auto(bulk api)], templates [logstash], shards [1]/[0]
[2020-09-27T02:00:02,032][INFO ][o.e.c.m.MetadataCreateIndexService] [SELKS2] [logstash-http-2020.09.27] creating index, cause [auto(bulk api)], templates [logstash], shards [1]/[0]
[2020-09-27T02:00:02,181][INFO ][o.e.c.m.MetadataCreateIndexService] [SELKS2] [logstash-alert-2020.09.27] creating index, cause [auto(bulk api)], templates [logstash], shards [1]/[0]
[2020-09-27T02:00:02,555][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-flow-2020.09.27/yAy9Z-_IQhaAbhMCq9BU5w] update_mapping [_doc]
[2020-09-27T02:00:02,739][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-tls-2020.09.27/xVw_m4z2RruVAEmFt6sZGQ] update_mapping [_doc]
[2020-09-27T02:00:02,747][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-fileinfo-2020.09.27/lSUFo6xuS2-QbBCPAwpTnw] update_mapping [_doc]
[2020-09-27T02:00:02,755][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-http-2020.09.27/RqUmP8jrQQiD1rnqsowXMQ] update_mapping [_doc]
[2020-09-27T02:00:02,765][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-dns-2020.09.27/15rXpPC4Q42ArXPxkhPVcw] update_mapping [_doc]
[2020-09-27T02:00:02,856][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-alert-2020.09.27/ZEcq-MPcRiusFFv0Q7DeYw] update_mapping [_doc]
[2020-09-27T02:00:02,968][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-dns-2020.09.27/15rXpPC4Q42ArXPxkhPVcw] update_mapping [_doc]
[2020-09-27T02:00:03,079][INFO ][o.e.c.m.MetadataCreateIndexService] [SELKS2] [logstash-2020.09.27] creating index, cause [auto(bulk api)], templates [logstash], shards [1]/[0]
[2020-09-27T02:00:03,228][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-http-2020.09.27/RqUmP8jrQQiD1rnqsowXMQ] update_mapping [_doc]
[2020-09-27T02:00:03,236][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-fileinfo-2020.09.27/lSUFo6xuS2-QbBCPAwpTnw] update_mapping [_doc]
[2020-09-27T02:00:03,524][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-2020.09.27/wU-tIVRJRpS0vsqMThTQ3g] update_mapping [_doc]
[2020-09-27T02:00:03,780][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-tls-2020.09.27/xVw_m4z2RruVAEmFt6sZGQ] update_mapping [_doc]
[2020-09-27T02:00:04,781][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-tls-2020.09.27/xVw_m4z2RruVAEmFt6sZGQ] update_mapping [_doc]
[2020-09-27T02:00:04,783][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-alert-2020.09.27/ZEcq-MPcRiusFFv0Q7DeYw] update_mapping [_doc]
[2020-09-27T02:00:04,873][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-dns-2020.09.27/15rXpPC4Q42ArXPxkhPVcw] update_mapping [_doc]
[2020-09-27T02:00:04,880][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-dns-2020.09.27/15rXpPC4Q42ArXPxkhPVcw] update_mapping [_doc]
[2020-09-27T02:00:05,790][INFO ][o.e.c.m.MetadataCreateIndexService] [SELKS2] [logstash-dhcp-2020.09.27] creating index, cause [auto(bulk api)], templates [logstash], shards [1]/[0]
[2020-09-27T02:00:05,901][INFO ][o.e.c.m.MetadataCreateIndexService] [SELKS2] [logstash-anomaly-2020.09.27] creating index, cause [auto(bulk api)], templates [logstash], shards [1]/[0]
[2020-09-27T02:00:06,062][INFO ][o.e.c.m.MetadataCreateIndexService] [SELKS2] [logstash-smtp-2020.09.27] creating index, cause [auto(bulk api)], templates [logstash], shards [1]/[0]
[2020-09-27T02:00:06,356][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-fileinfo-2020.09.27/lSUFo6xuS2-QbBCPAwpTnw] update_mapping [_doc]
[2020-09-27T02:00:06,366][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-http-2020.09.27/RqUmP8jrQQiD1rnqsowXMQ] update_mapping [_doc]
[2020-09-27T02:00:06,538][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-dhcp-2020.09.27/v3c2212RSJmSxcWrs_jn-w] update_mapping [_doc]
[2020-09-27T02:00:06,558][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-anomaly-2020.09.27/3Zdi4g4AT2WDfRadG47HVA] update_mapping [_doc]
[2020-09-27T02:00:06,632][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-fileinfo-2020.09.27/lSUFo6xuS2-QbBCPAwpTnw] update_mapping [_doc]
[2020-09-27T02:00:06,638][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-dns-2020.09.27/15rXpPC4Q42ArXPxkhPVcw] update_mapping [_doc]
[2020-09-27T02:00:06,643][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-http-2020.09.27/RqUmP8jrQQiD1rnqsowXMQ] update_mapping [_doc]
[2020-09-27T02:00:06,648][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-smtp-2020.09.27/JeJcc0IcRICMWQq3oQNMwQ] update_mapping [_doc]
[2020-09-27T02:00:06,835][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-flow-2020.09.27/yAy9Z-_IQhaAbhMCq9BU5w] update_mapping [_doc]
[2020-09-27T02:00:08,793][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-alert-2020.09.27/ZEcq-MPcRiusFFv0Q7DeYw] update_mapping [_doc]
[2020-09-27T02:00:09,803][INFO ][o.e.c.m.MetadataCreateIndexService] [SELKS2] [logstash-snmp-2020.09.27] creating index, cause [auto(bulk api)], templates [logstash], shards [1]/[0]
[2020-09-27T02:00:09,927][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-dns-2020.09.27/15rXpPC4Q42ArXPxkhPVcw] update_mapping [_doc]
[2020-09-27T02:00:10,172][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-snmp-2020.09.27/z7fQ7mXJSvqYVeoF0duODw] update_mapping [_doc]
[2020-09-27T02:00:10,261][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-snmp-2020.09.27/z7fQ7mXJSvqYVeoF0duODw] update_mapping [_doc]
[2020-09-27T02:00:14,821][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-dns-2020.09.27/15rXpPC4Q42ArXPxkhPVcw] update_mapping [_doc]
[2020-09-27T02:00:20,241][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-2020.09.27/wU-tIVRJRpS0vsqMThTQ3g] update_mapping [_doc]
[2020-09-27T02:00:20,830][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-dns-2020.09.27/15rXpPC4Q42ArXPxkhPVcw] update_mapping [_doc]
[2020-09-27T02:00:22,839][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-http-2020.09.27/RqUmP8jrQQiD1rnqsowXMQ] update_mapping [_doc]
[2020-09-27T02:00:41,710][INFO ][o.e.c.m.MetadataCreateIndexService] [SELKS2] [sessions2-200927] creating index, cause [auto(bulk api)], templates [sessions2_template], shards [1]/[0]
[2020-09-27T02:00:45,992][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-flow-2020.09.27/yAy9Z-_IQhaAbhMCq9BU5w] update_mapping [_doc]
[2020-09-27T02:01:14,439][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-http-2020.09.27/RqUmP8jrQQiD1rnqsowXMQ] update_mapping [_doc]
[2020-09-27T02:02:12,521][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-snmp-2020.09.27/z7fQ7mXJSvqYVeoF0duODw] update_mapping [_doc]
[2020-09-27T02:02:12,610][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-alert-2020.09.27/ZEcq-MPcRiusFFv0Q7DeYw] update_mapping [_doc]
[2020-09-27T02:04:42,777][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-http-2020.09.27/RqUmP8jrQQiD1rnqsowXMQ] update_mapping [_doc]
[2020-09-27T02:04:43,775][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-fileinfo-2020.09.27/lSUFo6xuS2-QbBCPAwpTnw] update_mapping [_doc]
[2020-09-27T02:10:47,516][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-http-2020.09.27/RqUmP8jrQQiD1rnqsowXMQ] update_mapping [_doc]
[2020-09-27T02:10:47,588][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-fileinfo-2020.09.27/lSUFo6xuS2-QbBCPAwpTnw] update_mapping [_doc]
[2020-09-27T02:14:05,969][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-http-2020.09.27/RqUmP8jrQQiD1rnqsowXMQ] update_mapping [_doc]
[2020-09-27T02:15:42,159][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-http-2020.09.27/RqUmP8jrQQiD1rnqsowXMQ] update_mapping [_doc]
[2020-09-27T02:15:42,243][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-fileinfo-2020.09.27/lSUFo6xuS2-QbBCPAwpTnw] update_mapping [_doc]
[2020-09-27T02:15:43,161][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-dns-2020.09.27/15rXpPC4Q42ArXPxkhPVcw] update_mapping [_doc]
[2020-09-27T02:18:05,450][INFO ][o.e.c.m.MetadataCreateIndexService] [SELKS2] [logstash-tftp-2020.09.27] creating index, cause [auto(bulk api)], templates [logstash], shards [1]/[0]
[2020-09-27T02:18:05,784][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-tftp-2020.09.27/unaBzackSDm9GgVJ2owAkQ] update_mapping [_doc]
[2020-09-27T02:18:43,548][INFO ][o.e.c.m.MetadataCreateIndexService] [SELKS2] [logstash-sip-2020.09.27] creating index, cause [auto(bulk api)], templates [logstash], shards [1]/[0]
[2020-09-27T02:18:43,913][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-sip-2020.09.27/T1wV1HRzTxOssSarhOh1cQ] update_mapping [_doc]
[2020-09-27T02:21:50,058][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-http-2020.09.27/RqUmP8jrQQiD1rnqsowXMQ] update_mapping [_doc]
[2020-09-27T02:24:05,304][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-smtp-2020.09.27/JeJcc0IcRICMWQq3oQNMwQ] update_mapping [_doc]
[2020-09-27T02:27:20,807][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-http-2020.09.27/RqUmP8jrQQiD1rnqsowXMQ] update_mapping [_doc]
[2020-09-27T02:27:20,904][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-fileinfo-2020.09.27/lSUFo6xuS2-QbBCPAwpTnw] update_mapping [_doc]
[2020-09-27T02:27:44,864][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-http-2020.09.27/RqUmP8jrQQiD1rnqsowXMQ] update_mapping [_doc]
[2020-09-27T02:35:33,042][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-http-2020.09.27/RqUmP8jrQQiD1rnqsowXMQ] update_mapping [_doc]
[2020-09-27T02:35:45,073][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-http-2020.09.27/RqUmP8jrQQiD1rnqsowXMQ] update_mapping [_doc]
[2020-09-27T02:41:16,848][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-snmp-2020.09.27/z7fQ7mXJSvqYVeoF0duODw] update_mapping [_doc]
[2020-09-27T02:42:38,033][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-sip-2020.09.27/T1wV1HRzTxOssSarhOh1cQ] update_mapping [_doc]
[2020-09-27T02:44:49,385][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-alert-2020.09.27/ZEcq-MPcRiusFFv0Q7DeYw] update_mapping [_doc]
[2020-09-27T02:44:49,510][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-tls-2020.09.27/xVw_m4z2RruVAEmFt6sZGQ] update_mapping [_doc]
[2020-09-27T02:47:58,821][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-flow-2020.09.27/yAy9Z-_IQhaAbhMCq9BU5w] update_mapping [_doc]
[2020-09-27T02:48:07,842][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-anomaly-2020.09.27/3Zdi4g4AT2WDfRadG47HVA] update_mapping [_doc]
[2020-09-27T02:50:33,179][INFO ][o.e.c.m.MetadataCreateIndexService] [SELKS2] [logstash-ssh-2020.09.27] creating index, cause [auto(bulk api)], templates [logstash], shards [1]/[0]
[2020-09-27T02:50:33,484][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-ssh-2020.09.27/RGrYvMxyQJefzo8NLPdRAA] update_mapping [_doc]
[2020-09-27T02:52:31,509][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-http-2020.09.27/RqUmP8jrQQiD1rnqsowXMQ] update_mapping [_doc]
[2020-09-27T02:57:34,159][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-dns-2020.09.27/15rXpPC4Q42ArXPxkhPVcw] update_mapping [_doc]
[2020-09-27T03:13:26,433][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-http-2020.09.27/RqUmP8jrQQiD1rnqsowXMQ] update_mapping [_doc]
[2020-09-27T03:23:02,802][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-flow-2020.09.27/yAy9Z-_IQhaAbhMCq9BU5w] update_mapping [_doc]
[2020-09-27T03:30:00,001][INFO ][o.e.x.s.SnapshotRetentionTask] [SELKS2] starting SLM retention snapshot cleanup task
[2020-09-27T03:30:00,002][INFO ][o.e.x.s.SnapshotRetentionTask] [SELKS2] there are no repositories to fetch, SLM retention snapshot cleanup task complete
[2020-09-27T03:52:26,854][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-alert-2020.09.27/ZEcq-MPcRiusFFv0Q7DeYw] update_mapping [_doc]
[2020-09-27T04:06:49,923][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-alert-2020.09.27/ZEcq-MPcRiusFFv0Q7DeYw] update_mapping [_doc]
[2020-09-27T04:06:49,927][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-tls-2020.09.27/xVw_m4z2RruVAEmFt6sZGQ] update_mapping [_doc]
[2020-09-27T04:12:16,691][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-fileinfo-2020.09.27/lSUFo6xuS2-QbBCPAwpTnw] update_mapping [_doc]
[2020-09-27T04:12:16,814][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-http-2020.09.27/RqUmP8jrQQiD1rnqsowXMQ] update_mapping [_doc]
[2020-09-27T04:36:24,034][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-sip-2020.09.27/T1wV1HRzTxOssSarhOh1cQ] update_mapping [_doc]
[2020-09-27T05:22:47,920][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-alert-2020.09.27/ZEcq-MPcRiusFFv0Q7DeYw] update_mapping [_doc]
[2020-09-27T05:45:40,046][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-fileinfo-2020.09.27/lSUFo6xuS2-QbBCPAwpTnw] update_mapping [_doc]
[2020-09-27T05:45:40,116][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-http-2020.09.27/RqUmP8jrQQiD1rnqsowXMQ] update_mapping [_doc]
[2020-09-27T06:18:47,834][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-http-2020.09.27/RqUmP8jrQQiD1rnqsowXMQ] update_mapping [_doc]
[2020-09-27T06:18:47,941][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-fileinfo-2020.09.27/lSUFo6xuS2-QbBCPAwpTnw] update_mapping [_doc]
[2020-09-27T06:20:05,047][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-ssh-2020.09.27/RGrYvMxyQJefzo8NLPdRAA] update_mapping [_doc]
[2020-09-27T07:44:37,021][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-smtp-2020.09.27/JeJcc0IcRICMWQq3oQNMwQ] update_mapping [_doc]
[2020-09-27T08:35:28,158][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-alert-2020.09.27/ZEcq-MPcRiusFFv0Q7DeYw] update_mapping [_doc]
[2020-09-27T09:20:33,707][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-alert-2020.09.27/ZEcq-MPcRiusFFv0Q7DeYw] update_mapping [_doc]
[2020-09-27T09:22:11,908][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-alert-2020.09.27/ZEcq-MPcRiusFFv0Q7DeYw] update_mapping [_doc]
[2020-09-27T10:13:10,514][INFO ][o.e.c.m.MetadataMappingService] [SELKS2] [logstash-http-2020.09.27/RqUmP8jrQQiD1rnqsowXMQ] update_mapping [_doc]

michal25 commented 4 years ago

And the logstash log:

root@SELKS2:/var/log/logstash# tail -100 logstash-plain.log
[2020-09-23T12:27:03,899][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://127.0.0.1:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2020-09-23T12:27:05,513][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{["LogStash::Filters::Metrics", {"add_tag"=>"metric", "id"=>"c0db4298d0ed6050a2f6b8477f5838e46c80764e15c899706176877a209062a8", "flush_interval"=>30, "meter"=>["eve_insert"]}]=>[{"thread_id"=>32, "name"=>"[main]>worker0", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>33, "name"=>"[main]>worker1", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>34, "name"=>"[main]>worker2", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>35, "name"=>"[main]>worker3", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}
[2020-09-23T12:27:08,908][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://127.0.0.1:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2020-09-23T12:27:10,557][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{["LogStash::Filters::Metrics", {"add_tag"=>"metric", "id"=>"c0db4298d0ed6050a2f6b8477f5838e46c80764e15c899706176877a209062a8", "flush_interval"=>30, "meter"=>["eve_insert"]}]=>[{"thread_id"=>32, "name"=>"[main]>worker0", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>33, "name"=>"[main]>worker1", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>34, "name"=>"[main]>worker2", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>35, "name"=>"[main]>worker3", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}
[2020-09-23T12:27:13,915][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://127.0.0.1:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2020-09-23T12:27:15,601][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{["LogStash::Filters::Metrics", {"add_tag"=>"metric", "id"=>"c0db4298d0ed6050a2f6b8477f5838e46c80764e15c899706176877a209062a8", "flush_interval"=>30, "meter"=>["eve_insert"]}]=>[{"thread_id"=>32, "name"=>"[main]>worker0", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>33, "name"=>"[main]>worker1", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>34, "name"=>"[main]>worker2", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>35, "name"=>"[main]>worker3", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}
[2020-09-23T12:27:18,924][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://127.0.0.1:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2020-09-23T12:27:20,645][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{["LogStash::Filters::Metrics", {"add_tag"=>"metric", "id"=>"c0db4298d0ed6050a2f6b8477f5838e46c80764e15c899706176877a209062a8", "flush_interval"=>30, "meter"=>["eve_insert"]}]=>[{"thread_id"=>32, "name"=>"[main]>worker0", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>33, "name"=>"[main]>worker1", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>34, "name"=>"[main]>worker2", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>35, "name"=>"[main]>worker3", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}
[2020-09-23T12:27:23,931][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://127.0.0.1:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2020-09-23T12:27:25,689][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{["LogStash::Filters::Metrics", {"add_tag"=>"metric", "id"=>"c0db4298d0ed6050a2f6b8477f5838e46c80764e15c899706176877a209062a8", "flush_interval"=>30, "meter"=>["eve_insert"]}]=>[{"thread_id"=>32, "name"=>"[main]>worker0", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>33, "name"=>"[main]>worker1", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>34, "name"=>"[main]>worker2", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>35, "name"=>"[main]>worker3", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}
[2020-09-23T12:27:28,939][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://127.0.0.1:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2020-09-23T12:27:30,733][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{["LogStash::Filters::Metrics", {"add_tag"=>"metric", "id"=>"c0db4298d0ed6050a2f6b8477f5838e46c80764e15c899706176877a209062a8", "flush_interval"=>30, "meter"=>["eve_insert"]}]=>[{"thread_id"=>32, "name"=>"[main]>worker0", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>33, "name"=>"[main]>worker1", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>34, "name"=>"[main]>worker2", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>35, "name"=>"[main]>worker3", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}
[2020-09-23T12:27:33,946][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://127.0.0.1:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2020-09-23T12:27:35,779][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{["LogStash::Filters::Metrics", {"add_tag"=>"metric", "id"=>"c0db4298d0ed6050a2f6b8477f5838e46c80764e15c899706176877a209062a8", "flush_interval"=>30, "meter"=>["eve_insert"]}]=>[{"thread_id"=>32, "name"=>"[main]>worker0", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>33, "name"=>"[main]>worker1", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>34, "name"=>"[main]>worker2", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>35, "name"=>"[main]>worker3", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}
[2020-09-23T12:27:38,954][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://127.0.0.1:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2020-09-23T12:27:40,823][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{["LogStash::Filters::Metrics", {"add_tag"=>"metric", "id"=>"c0db4298d0ed6050a2f6b8477f5838e46c80764e15c899706176877a209062a8", "flush_interval"=>30, "meter"=>["eve_insert"]}]=>[{"thread_id"=>32, "name"=>"[main]>worker0", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>33, "name"=>"[main]>worker1", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>34, "name"=>"[main]>worker2", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>35, "name"=>"[main]>worker3", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}
[2020-09-23T12:27:43,961][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://127.0.0.1:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2020-09-23T12:27:45,867][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{["LogStash::Filters::Metrics", {"add_tag"=>"metric", "id"=>"c0db4298d0ed6050a2f6b8477f5838e46c80764e15c899706176877a209062a8", "flush_interval"=>30, "meter"=>["eve_insert"]}]=>[{"thread_id"=>32, "name"=>"[main]>worker0", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>33, "name"=>"[main]>worker1", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>34, "name"=>"[main]>worker2", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>35, "name"=>"[main]>worker3", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}
[2020-09-23T12:27:48,968][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://127.0.0.1:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2020-09-23T12:27:50,911][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{["LogStash::Filters::Metrics", {"add_tag"=>"metric", "id"=>"c0db4298d0ed6050a2f6b8477f5838e46c80764e15c899706176877a209062a8", "flush_interval"=>30, "meter"=>["eve_insert"]}]=>[{"thread_id"=>32, "name"=>"[main]>worker0", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>33, "name"=>"[main]>worker1", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>34, "name"=>"[main]>worker2", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>35, "name"=>"[main]>worker3", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}
[2020-09-23T12:27:53,979][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error.
{:url=>"http://127.0.0.1:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused (Connection refused)"} [2020-09-23T12:27:55,955][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{["LogStash::Filters::Metrics", {"add_tag"=>"metric", "id"=>"c0db4298d0ed6050a2f6b8477f5838e46c80764e15c899706176877a209062a8", "flush_interval"=>30, "meter"=>["eve_insert"]}]=>[{"thread_id"=>32, "name"=>"[main]>worker0", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in sleep'"}, {"thread_id"=>33, "name"=>"[main]>worker1", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:insleep'"}, {"thread_id"=>34, "name"=>"[main]>worker2", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in sleep'"}, {"thread_id"=>35, "name"=>"[main]>worker3", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:insleep'"}]}} [2020-09-23T12:27:58,988][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://127.0.0.1:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused (Connection refused)"} [2020-09-23T12:28:00,999][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{["LogStash::Filters::Metrics", {"add_tag"=>"metric", "id"=>"c0db4298d0ed6050a2f6b8477f5838e46c80764e15c899706176877a209062a8", "flush_interval"=>30, "meter"=>["eve_insert"]}]=>[{"thread_id"=>32, "name"=>"[main]>worker0", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in sleep'"}, {"thread_id"=>33, "name"=>"[main]>worker1", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:insleep'"}, {"thread_id"=>34, "name"=>"[main]>worker2", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in sleep'"}, {"thread_id"=>35, "name"=>"[main]>worker3", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:insleep'"}]}} [2020-09-23T12:28:02,786][ERROR][logstash.outputs.elasticsearch][main][e55f734d663b7fb7ca21a05c69227f334d0c6198948f303fac6e50c03be43b13] Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>64} [2020-09-23T12:28:02,787][ERROR][logstash.outputs.elasticsearch][main][e55f734d663b7fb7ca21a05c69227f334d0c6198948f303fac6e50c03be43b13] Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? 
{:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>64} [2020-09-23T12:28:02,788][ERROR][logstash.outputs.elasticsearch][main][e55f734d663b7fb7ca21a05c69227f334d0c6198948f303fac6e50c03be43b13] Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>64} [2020-09-23T12:28:02,791][ERROR][logstash.outputs.elasticsearch][main][e55f734d663b7fb7ca21a05c69227f334d0c6198948f303fac6e50c03be43b13] Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>64} [2020-09-23T12:28:03,995][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://127.0.0.1:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused (Connection refused)"} [2020-09-23T12:28:06,038][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{["LogStash::Filters::Metrics", {"add_tag"=>"metric", "id"=>"c0db4298d0ed6050a2f6b8477f5838e46c80764e15c899706176877a209062a8", "flush_interval"=>30, "meter"=>["eve_insert"]}]=>[{"thread_id"=>32, "name"=>"[main]>worker0", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in sleep'"}, {"thread_id"=>33, "name"=>"[main]>worker1", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:insleep'"}, {"thread_id"=>34, "name"=>"[main]>worker2", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in sleep'"}, {"thread_id"=>35, "name"=>"[main]>worker3", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:insleep'"}]}} [2020-09-23T12:28:09,002][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. 
{:url=>"http://127.0.0.1:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused (Connection refused)"} [2020-09-23T12:28:11,079][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{["LogStash::Filters::Metrics", {"add_tag"=>"metric", "id"=>"c0db4298d0ed6050a2f6b8477f5838e46c80764e15c899706176877a209062a8", "flush_interval"=>30, "meter"=>["eve_insert"]}]=>[{"thread_id"=>32, "name"=>"[main]>worker0", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in sleep'"}, {"thread_id"=>33, "name"=>"[main]>worker1", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:insleep'"}, {"thread_id"=>34, "name"=>"[main]>worker2", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in sleep'"}, {"thread_id"=>35, "name"=>"[main]>worker3", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:insleep'"}]}} [2020-09-23T12:28:14,009][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://127.0.0.1:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused (Connection refused)"} [2020-09-23T12:28:16,118][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{["LogStash::Filters::Metrics", {"add_tag"=>"metric", "id"=>"c0db4298d0ed6050a2f6b8477f5838e46c80764e15c899706176877a209062a8", "flush_interval"=>30, "meter"=>["eve_insert"]}]=>[{"thread_id"=>32, "name"=>"[main]>worker0", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in sleep'"}, {"thread_id"=>33, "name"=>"[main]>worker1", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:insleep'"}, {"thread_id"=>34, "name"=>"[main]>worker2", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in sleep'"}, {"thread_id"=>35, "name"=>"[main]>worker3", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:insleep'"}]}} [2020-09-23T12:28:19,016][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. 
{:url=>"http://127.0.0.1:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused (Connection refused)"} [2020-09-23T12:28:21,163][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{["LogStash::Filters::Metrics", {"add_tag"=>"metric", "id"=>"c0db4298d0ed6050a2f6b8477f5838e46c80764e15c899706176877a209062a8", "flush_interval"=>30, "meter"=>["eve_insert"]}]=>[{"thread_id"=>32, "name"=>"[main]>worker0", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in sleep'"}, {"thread_id"=>33, "name"=>"[main]>worker1", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:insleep'"}, {"thread_id"=>34, "name"=>"[main]>worker2", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in sleep'"}, {"thread_id"=>35, "name"=>"[main]>worker3", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:insleep'"}]}} [2020-09-23T12:28:24,023][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://127.0.0.1:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused (Connection refused)"} [2020-09-23T12:28:26,207][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{["LogStash::Filters::Metrics", {"add_tag"=>"metric", "id"=>"c0db4298d0ed6050a2f6b8477f5838e46c80764e15c899706176877a209062a8", "flush_interval"=>30, "meter"=>["eve_insert"]}]=>[{"thread_id"=>32, "name"=>"[main]>worker0", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in sleep'"}, {"thread_id"=>33, "name"=>"[main]>worker1", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:insleep'"}, {"thread_id"=>34, "name"=>"[main]>worker2", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in sleep'"}, {"thread_id"=>35, "name"=>"[main]>worker3", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:insleep'"}]}} [2020-09-23T12:28:29,030][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. 
{:url=>"http://127.0.0.1:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused (Connection refused)"} [2020-09-23T12:28:31,252][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{["LogStash::Filters::Metrics", {"add_tag"=>"metric", "id"=>"c0db4298d0ed6050a2f6b8477f5838e46c80764e15c899706176877a209062a8", "flush_interval"=>30, "meter"=>["eve_insert"]}]=>[{"thread_id"=>32, "name"=>"[main]>worker0", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in sleep'"}, {"thread_id"=>33, "name"=>"[main]>worker1", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:insleep'"}, {"thread_id"=>34, "name"=>"[main]>worker2", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in sleep'"}, {"thread_id"=>35, "name"=>"[main]>worker3", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:insleep'"}]}} [2020-09-23T12:28:34,040][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://127.0.0.1:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused (Connection refused)"} [2020-09-23T12:28:36,296][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{["LogStash::Filters::Metrics", {"add_tag"=>"metric", "id"=>"c0db4298d0ed6050a2f6b8477f5838e46c80764e15c899706176877a209062a8", "flush_interval"=>30, "meter"=>["eve_insert"]}]=>[{"thread_id"=>32, "name"=>"[main]>worker0", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in sleep'"}, {"thread_id"=>33, "name"=>"[main]>worker1", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:insleep'"}, {"thread_id"=>34, "name"=>"[main]>worker2", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in sleep'"}, {"thread_id"=>35, "name"=>"[main]>worker3", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:insleep'"}]}} [2020-09-23T12:28:39,046][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. 
{:url=>"http://127.0.0.1:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused (Connection refused)"} [2020-09-23T12:28:41,340][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{["LogStash::Filters::Metrics", {"add_tag"=>"metric", "id"=>"c0db4298d0ed6050a2f6b8477f5838e46c80764e15c899706176877a209062a8", "flush_interval"=>30, "meter"=>["eve_insert"]}]=>[{"thread_id"=>32, "name"=>"[main]>worker0", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in sleep'"}, {"thread_id"=>33, "name"=>"[main]>worker1", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:insleep'"}, {"thread_id"=>34, "name"=>"[main]>worker2", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in sleep'"}, {"thread_id"=>35, "name"=>"[main]>worker3", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:insleep'"}]}} [2020-09-23T12:28:44,053][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://127.0.0.1:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused (Connection refused)"} [2020-09-23T12:28:46,384][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{["LogStash::Filters::Metrics", {"add_tag"=>"metric", "id"=>"c0db4298d0ed6050a2f6b8477f5838e46c80764e15c899706176877a209062a8", "flush_interval"=>30, "meter"=>["eve_insert"]}]=>[{"thread_id"=>32, "name"=>"[main]>worker0", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in sleep'"}, {"thread_id"=>33, "name"=>"[main]>worker1", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:insleep'"}, {"thread_id"=>34, "name"=>"[main]>worker2", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in sleep'"}, {"thread_id"=>35, "name"=>"[main]>worker3", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:insleep'"}]}} [2020-09-23T12:28:49,060][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. 
{:url=>"http://127.0.0.1:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused (Connection refused)"} [2020-09-23T12:28:51,428][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{["LogStash::Filters::Metrics", {"add_tag"=>"metric", "id"=>"c0db4298d0ed6050a2f6b8477f5838e46c80764e15c899706176877a209062a8", "flush_interval"=>30, "meter"=>["eve_insert"]}]=>[{"thread_id"=>32, "name"=>"[main]>worker0", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in sleep'"}, {"thread_id"=>33, "name"=>"[main]>worker1", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:insleep'"}, {"thread_id"=>34, "name"=>"[main]>worker2", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in sleep'"}, {"thread_id"=>35, "name"=>"[main]>worker3", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:insleep'"}]}} [2020-09-23T12:28:54,067][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://127.0.0.1:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused (Connection refused)"} [2020-09-23T12:28:56,472][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{["LogStash::Filters::Metrics", {"add_tag"=>"metric", "id"=>"c0db4298d0ed6050a2f6b8477f5838e46c80764e15c899706176877a209062a8", "flush_interval"=>30, "meter"=>["eve_insert"]}]=>[{"thread_id"=>32, "name"=>"[main]>worker0", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in sleep'"}, {"thread_id"=>33, "name"=>"[main]>worker1", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:insleep'"}, {"thread_id"=>34, "name"=>"[main]>worker2", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in sleep'"}, {"thread_id"=>35, "name"=>"[main]>worker3", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:insleep'"}]}} [2020-09-23T12:28:59,074][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. 
{:url=>"http://127.0.0.1:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused (Connection refused)"} [2020-09-23T12:34:55,260][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.9.1", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc OpenJDK 64-Bit Server VM 11.0.8+10-post-Debian-1deb10u1 on 11.0.8+10-post-Debian-1deb10u1 +indy +jit [linux-x86_64]"} [2020-09-23T12:34:57,575][INFO ][org.reflections.Reflections] Reflections took 28 ms to scan 1 urls, producing 22 keys and 45 values [2020-09-23T12:34:58,322][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://127.0.0.1:9200/]}} [2020-09-23T12:34:58,480][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://127.0.0.1:9200/"} [2020-09-23T12:34:58,524][INFO ][logstash.outputs.elasticsearch][main] ES Output version determined {:es_version=>7} [2020-09-23T12:34:58,528][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>7} [2020-09-23T12:34:58,588][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//127.0.0.1"]} [2020-09-23T12:34:58,598][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://127.0.0.1:9200/]}} [2020-09-23T12:34:58,607][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://127.0.0.1:9200/"} [2020-09-23T12:34:58,614][INFO ][logstash.outputs.elasticsearch][main] ES Output version determined {:es_version=>7} [2020-09-23T12:34:58,621][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>7} [2020-09-23T12:34:58,637][INFO ][logstash.outputs.elasticsearch][main] Using mapping template from {:path=>"/etc/logstash/elasticsearch7-template.json"} [2020-09-23T12:34:58,659][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//127.0.0.1"]} [2020-09-23T12:34:58,715][INFO ][logstash.outputs.elasticsearch][main] Attempting to install template {:manage_template=>{"template"=>"logstash-", "version"=>60001, "settings"=>{"number_of_replicas"=>0, "index.refresh_interval"=>"5s"}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}} [2020-09-23T12:34:58,683][INFO ][logstash.outputs.elasticsearch][main] Using mapping template from {:path=>"/etc/logstash/elasticsearch7-template.json"} [2020-09-23T12:34:58,749][INFO ][logstash.outputs.elasticsearch][main] Attempting to install template {:manage_template=>{"template"=>"logstash-", "version"=>60001, "settings"=>{"number_of_replicas"=>0, 
"index.refresh_interval"=>"5s"}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}} [2020-09-23T12:34:58,755][INFO ][logstash.outputs.elasticsearch][main] Installing elasticsearch template to _template/logstash [2020-09-23T12:34:58,758][INFO ][logstash.outputs.elasticsearch][main] Installing elasticsearch template to _template/logstash [2020-09-23T12:34:58,912][INFO ][logstash.filters.geoip ][main] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-6.0.3-java/vendor/GeoLite2-City.mmdb"} [2020-09-23T12:34:59,029][INFO ][logstash.filters.geoip ][main] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-6.0.3-java/vendor/GeoLite2-City.mmdb"} [2020-09-23T12:34:59,082][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/etc/logstash/conf.d/logstash.conf", "/etc/logstash/conf.d/scirius-logstash.conf"], :thread=>"#"} [2020-09-23T12:35:00,021][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>0.94} [2020-09-23T12:35:00,195][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"} [2020-09-23T12:35:00,241][INFO ][filewatch.observingtail ][main][d4aef1d642dafd3cc0ec28e9e79530daa4bc5c58ba6b725806ceff6c4cfb1cf0] START, creating Discoverer, Watch with file and sincedb collections [2020-09-23T12:35:00,235][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]} [2020-09-23T12:35:00,461][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600} [2020-09-23T12:53:39,334][WARN ][logstash.outputs.elasticsearch][main][e55f734d663b7fb7ca21a05c69227f334d0c6198948f303fac6e50c03be43b13] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::ClientProtocolException] 127.0.0.1:9200 failed to respond {:url=>http://127.0.0.1:9200/, :error_message=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::ClientProtocolException] 127.0.0.1:9200 failed to respond", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"} [2020-09-23T12:53:39,347][WARN ][logstash.outputs.elasticsearch][main][e55f734d663b7fb7ca21a05c69227f334d0c6198948f303fac6e50c03be43b13] Marking url as dead. 
Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::ClientProtocolException] 127.0.0.1:9200 failed to respond {:url=>http://127.0.0.1:9200/, :error_message=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::ClientProtocolException] 127.0.0.1:9200 failed to respond", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"} [2020-09-23T12:53:39,359][WARN ][logstash.outputs.elasticsearch][main][e55f734d663b7fb7ca21a05c69227f334d0c6198948f303fac6e50c03be43b13] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Broken pipe (Write failed) {:url=>http://127.0.0.1:9200/, :error_message=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Broken pipe (Write failed)", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"} [2020-09-23T12:53:39,373][ERROR][logstash.outputs.elasticsearch][main][e55f734d663b7fb7ca21a05c69227f334d0c6198948f303fac6e50c03be43b13] Attempted to send a bulk request to elasticsearch' but Elasticsearch appears to be unreachable or down! {:error_message=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Broken pipe (Write failed)", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError", :will_retry_in_seconds=>2} [2020-09-23T12:53:39,374][WARN ][logstash.outputs.elasticsearch][main][e55f734d663b7fb7ca21a05c69227f334d0c6198948f303fac6e50c03be43b13] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Broken pipe (Write failed) {:url=>http://127.0.0.1:9200/, :error_message=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Broken pipe (Write failed)", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"} [2020-09-23T12:53:39,403][ERROR][logstash.outputs.elasticsearch][main][e55f734d663b7fb7ca21a05c69227f334d0c6198948f303fac6e50c03be43b13] Attempted to send a bulk request to elasticsearch' but Elasticsearch appears to be unreachable or down! {:error_message=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::ClientProtocolException] 127.0.0.1:9200 failed to respond", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError", :will_retry_in_seconds=>2} [2020-09-23T12:53:39,403][ERROR][logstash.outputs.elasticsearch][main][e55f734d663b7fb7ca21a05c69227f334d0c6198948f303fac6e50c03be43b13] Attempted to send a bulk request to elasticsearch' but Elasticsearch appears to be unreachable or down! {:error_message=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Broken pipe (Write failed)", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError", :will_retry_in_seconds=>2} [2020-09-23T12:53:39,404][ERROR][logstash.outputs.elasticsearch][main][e55f734d663b7fb7ca21a05c69227f334d0c6198948f303fac6e50c03be43b13] Attempted to send a bulk request to elasticsearch' but Elasticsearch appears to be unreachable or down! 
{:error_message=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::ClientProtocolException] 127.0.0.1:9200 failed to respond", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError", :will_retry_in_seconds=>2} [2020-09-23T12:53:41,417][ERROR][logstash.outputs.elasticsearch][main][e55f734d663b7fb7ca21a05c69227f334d0c6198948f303fac6e50c03be43b13] Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>4} [2020-09-23T12:53:41,412][ERROR][logstash.outputs.elasticsearch][main][e55f734d663b7fb7ca21a05c69227f334d0c6198948f303fac6e50c03be43b13] Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>4} [2020-09-23T12:53:41,411][ERROR][logstash.outputs.elasticsearch][main][e55f734d663b7fb7ca21a05c69227f334d0c6198948f303fac6e50c03be43b13] Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>4} [2020-09-23T12:53:41,409][ERROR][logstash.outputs.elasticsearch][main][e55f734d663b7fb7ca21a05c69227f334d0c6198948f303fac6e50c03be43b13] Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>4} [2020-09-23T12:53:43,875][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://127.0.0.1:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused (Connection refused)"} [2020-09-23T12:53:45,430][ERROR][logstash.outputs.elasticsearch][main][e55f734d663b7fb7ca21a05c69227f334d0c6198948f303fac6e50c03be43b13] Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>8} [2020-09-23T12:53:45,431][ERROR][logstash.outputs.elasticsearch][main][e55f734d663b7fb7ca21a05c69227f334d0c6198948f303fac6e50c03be43b13] Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? 
{:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>8} [2020-09-23T12:53:45,430][ERROR][logstash.outputs.elasticsearch][main][e55f734d663b7fb7ca21a05c69227f334d0c6198948f303fac6e50c03be43b13] Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>8} [2020-09-23T12:53:45,431][ERROR][logstash.outputs.elasticsearch][main][e55f734d663b7fb7ca21a05c69227f334d0c6198948f303fac6e50c03be43b13] Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>8} [2020-09-23T12:53:48,900][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://127.0.0.1:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused (Connection refused)"} [2020-09-23T12:53:53,443][ERROR][logstash.outputs.elasticsearch][main][e55f734d663b7fb7ca21a05c69227f334d0c6198948f303fac6e50c03be43b13] Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>16} [2020-09-23T12:53:53,443][ERROR][logstash.outputs.elasticsearch][main][e55f734d663b7fb7ca21a05c69227f334d0c6198948f303fac6e50c03be43b13] Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>16} [2020-09-23T12:53:53,445][ERROR][logstash.outputs.elasticsearch][main][e55f734d663b7fb7ca21a05c69227f334d0c6198948f303fac6e50c03be43b13] Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>16} [2020-09-23T12:53:53,444][ERROR][logstash.outputs.elasticsearch][main][e55f734d663b7fb7ca21a05c69227f334d0c6198948f303fac6e50c03be43b13] Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>16} [2020-09-23T12:53:53,956][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://127.0.0.1:9200/"}

michal25 commented 4 years ago

And the health check:

root@SELKS2:/var/log/logstash# selks-health-check_stamus
● suricata.service - LSB: Next Generation IDS/IPS
   Loaded: loaded (/etc/init.d/suricata; generated)
   Active: active (running) since Sun 2020-09-27 02:00:44 CEST; 8h ago
     Docs: man:systemd-sysv-generator(8)
  Process: 13131 ExecStart=/etc/init.d/suricata start (code=exited, status=0/SUCCESS)
    Tasks: 10 (limit: 4915)
   Memory: 821.1M
   CGroup: /system.slice/suricata.service
           └─13139 /usr/bin/suricata -c /etc/suricata/suricata.yaml --pidfile /var/run/suricata.pid --af-packet -D -v --user=logstash

Sep 27 02:00:44 SELKS2 systemd[1]: Starting LSB: Next Generation IDS/IPS...
Sep 27 02:00:44 SELKS2 suricata[13131]: Starting suricata in IDS (af-packet) mode... done.
Sep 27 02:00:44 SELKS2 systemd[1]: Started LSB: Next Generation IDS/IPS.

● elasticsearch.service - Elasticsearch
   Loaded: loaded (/lib/systemd/system/elasticsearch.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2020-09-23 12:53:48 CEST; 3 days ago
     Docs: https://www.elastic.co
 Main PID: 4392 (java)
    Tasks: 94 (limit: 4915)
   Memory: 9.4G
   CGroup: /system.slice/elasticsearch.service
           ├─4392 /usr/share/elasticsearch/jdk/bin/java -Xshare:auto -Des.networkaddress.cache.ttl=60 -Des.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreTouch …
           └─4575 /usr/share/elasticsearch/modules/x-pack-ml/platform/linux-x86_64/bin/controller

Sep 23 12:53:39 SELKS2 systemd[1]: Starting Elasticsearch...
Sep 23 12:53:48 SELKS2 systemd[1]: Started Elasticsearch.

● logstash.service - logstash
   Loaded: loaded (/etc/systemd/system/logstash.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2020-09-23 12:34:21 CEST; 3 days ago
 Main PID: 635 (java)
    Tasks: 41 (limit: 4915)
   Memory: 1.1G
   CGroup: /system.slice/logstash.service
           └─635 /bin/java -Xms4g -Xmx4g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djava.awt.headless=true -…

Sep 23 12:53:45 SELKS2 logstash[635]: [2020-09-23T12:53:45,431][ERROR][logstash.outputs.elasticsearch][main][e55f734d663b7fb7ca21a05c69227f334d0c6198948f303fac6e50c…
Sep 23 12:53:45 SELKS2 logstash[635]: [2020-09-23T12:53:45,430][ERROR][logstash.outputs.elasticsearch][main][e55f734d663b7fb7ca21a05c69227f334d0c6198948f303fac6e50c…
Sep 23 12:53:45 SELKS2 logstash[635]: [2020-09-23T12:53:45,431][ERROR][logstash.outputs.elasticsearch][main][e55f734d663b7fb7ca21a05c69227f334d0c6198948f303fac6e50c…
Sep 23 12:53:45 SELKS2 logstash[635]: [2020-09-23T12:53:45,430][ERROR][logstash.outputs.elasticsearch][main][e55f734d663b7fb7ca21a05c69227f334d0c6198948f303fac6e50c…
Sep 23 12:53:48 SELKS2 logstash[635]: [2020-09-23T12:53:48,900][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, …
Sep 23 12:53:53 SELKS2 logstash[635]: [2020-09-23T12:53:53,443][ERROR][logstash.outputs.elasticsearch][main][e55f734d663b7fb7ca21a05c69227f334d0c6198948f303fac6e50c…
Sep 23 12:53:53 SELKS2 logstash[635]: [2020-09-23T12:53:53,443][ERROR][logstash.outputs.elasticsearch][main][e55f734d663b7fb7ca21a05c69227f334d0c6198948f303fac6e50c…
Sep 23 12:53:53 SELKS2 logstash[635]: [2020-09-23T12:53:53,444][ERROR][logstash.outputs.elasticsearch][main][e55f734d663b7fb7ca21a05c69227f334d0c6198948f303fac6e50c…
Sep 23 12:53:53 SELKS2 logstash[635]: [2020-09-23T12:53:53,445][ERROR][logstash.outputs.elasticsearch][main][e55f734d663b7fb7ca21a05c69227f334d0c6198948f303fac6e50c…
Sep 23 12:53:53 SELKS2 logstash[635]: [2020-09-23T12:53:53,956][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>…0.0.1:9200/"}
Hint: Some lines were ellipsized, use -l to show in full.

● kibana.service - Kibana
   Loaded: loaded (/etc/systemd/system/kibana.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2020-09-23 12:53:48 CEST; 3 days ago
 Main PID: 4621 (node)
    Tasks: 11 (limit: 4915)
   Memory: 483.3M
   CGroup: /system.slice/kibana.service
           └─4621 /usr/share/kibana/bin/../node/bin/node /usr/share/kibana/bin/../src/cli

Sep 23 12:54:01 SELKS2 kibana[4621]: {"type":"log","@timestamp":"2020-09-23T10:54:01Z","tags":["warning","plugins","ingestManager"],"pid":4621,"message…kibana.yml."}
Sep 23 12:54:01 SELKS2 kibana[4621]: {"type":"log","@timestamp":"2020-09-23T10:54:01Z","tags":["warning","plugins","actions","actions"],"pid":4621,"mes…kibana.yml."}
Sep 23 12:54:01 SELKS2 kibana[4621]: {"type":"log","@timestamp":"2020-09-23T10:54:01Z","tags":["warning","plugins","alerts","plugins","alerting"],"pid"…kibana.yml."}
Sep 23 12:54:01 SELKS2 kibana[4621]: {"type":"log","@timestamp":"2020-09-23T10:54:01Z","tags":["info","plugins","monitoring","monitoring"],"pid":4621,"…ion cluster"}
Sep 23 12:54:01 SELKS2 kibana[4621]: {"type":"log","@timestamp":"2020-09-23T10:54:01Z","tags":["info","savedobjects-service"],"pid":4621,"message":"Wai…grations..."}
Sep 23 12:54:01 SELKS2 kibana[4621]: {"type":"log","@timestamp":"2020-09-23T10:54:01Z","tags":["warning","plugins","reporting","config"],"pid":4621,"message":"Chrom…
Sep 23 12:54:01 SELKS2 kibana[4621]: {"type":"log","@timestamp":"2020-09-23T10:54:01Z","tags":["info","savedobjects-service"],"pid":4621,"message":"Sta… migrations"}
Sep 23 12:54:01 SELKS2 kibana[4621]: {"type":"log","@timestamp":"2020-09-23T10:54:01Z","tags":["info","savedobjects-service"],"pid":4621,"message":"Cre… .kibana_2."}
Sep 23 12:54:01 SELKS2 kibana[4621]: {"type":"log","@timestamp":"2020-09-23T10:54:01Z","tags":["warning","savedobjects-service"],"pid":4621,"message":"Unable to con…
Sep 23 12:54:01 SELKS2 kibana[4621]: {"type":"log","@timestamp":"2020-09-23T10:54:01Z","tags":["warning","savedobjects-service"],"pid":4621,"message":"Another Kiban…
Hint: Some lines were ellipsized, use -l to show in full.

● evebox.service - EveBox Server
   Loaded: loaded (/lib/systemd/system/evebox.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2020-09-23 12:34:21 CEST; 3 days ago
 Main PID: 625 (evebox)
    Tasks: 8 (limit: 4915)
   Memory: 40.1M
   CGroup: /system.slice/evebox.service
           └─625 /usr/bin/evebox server

Sep 23 12:34:24 SELKS2 evebox[625]: 2020-09-23 12:34:24 (server.go:335) -- Failed to ping Elastic Search, delaying startup: : Get "http://local…ction refused
Sep 23 12:34:27 SELKS2 evebox[625]: 2020-09-23 12:34:27 (server.go:335) -- Failed to ping Elastic Search, delaying startup: : Get "http://local…ction refused
Sep 23 12:34:30 SELKS2 evebox[625]: 2020-09-23 12:34:30 (server.go:335) -- Failed to ping Elastic Search, delaying startup: : Get "http://local…ction refused
Sep 23 12:34:33 SELKS2 evebox[625]: 2020-09-23 12:34:33 (server.go:335) -- Failed to ping Elastic Search, delaying startup: : Get "http://local…ction refused
Sep 23 12:34:36 SELKS2 evebox[625]: 2020-09-23 12:34:36 (server.go:335) -- Failed to ping Elastic Search, delaying startup: : Get "http://local…ction refused
Sep 23 12:34:40 SELKS2 evebox[625]: 2020-09-23 12:34:40 (server.go:338) -- Connected to Elastic Search (version: 7.9.1)
Sep 23 12:34:40 SELKS2 evebox[625]: 2020-09-23 12:34:40 (elasticsearch.go:177) -- Assuming Logstash style index
Sep 23 12:34:40 SELKS2 evebox[625]: 2020-09-23 12:34:40 (server.go:131) -- Session reaper started
Sep 23 12:34:40 SELKS2 evebox[625]: 2020-09-23 12:34:40 (server.go:165) -- Authentication disabled.
Sep 23 12:34:40 SELKS2 evebox[625]: 2020-09-23 12:34:40 (server.go:261) -- Listening on [127.0.0.1]:5636
Hint: Some lines were ellipsized, use -l to show in full.

● molochviewer-selks.service - Moloch Viewer
   Loaded: loaded (/etc/systemd/system/molochviewer-selks.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2020-09-23 12:53:51 CEST; 3 days ago
 Main PID: 4689 (sh)
    Tasks: 12 (limit: 4915)
   Memory: 48.5M
   CGroup: /system.slice/molochviewer-selks.service
           ├─4689 /bin/sh -c /data/moloch/bin/node viewer.js -c /data/moloch/etc/config.ini >> /data/moloch/logs/viewer.log 2>&1
           └─4691 /data/moloch/bin/node viewer.js -c /data/moloch/etc/config.ini

Sep 23 12:53:51 SELKS2 systemd[1]: Started Moloch Viewer.

● molochpcapread-selks.service - Moloch Pcap Read
   Loaded: loaded (/etc/systemd/system/molochpcapread-selks.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2020-09-23 12:53:51 CEST; 3 days ago
 Main PID: 4684 (sh)
    Tasks: 6 (limit: 4915)
   Memory: 271.7M
   CGroup: /system.slice/molochpcapread-selks.service
           ├─4684 /bin/sh -c /data/moloch/bin/moloch-capture -c /data/moloch/etc/config.ini -m --copy --delete -R /data/nsm/ >> /data/moloch/logs/capture.log 2>&1
           └─4686 /data/moloch/bin/moloch-capture -c /data/moloch/etc/config.ini -m --copy --delete -R /data/nsm/

Sep 23 12:53:51 SELKS2 systemd[1]: Started Moloch Pcap Read.

scirius                          RUNNING   pid 4640, uptime 3 days, 21:51:31

ii  elasticsearch             7.9.1                  amd64  Distributed RESTful search engine built for the cloud
ii  elasticsearch-curator     5.8.1                  amd64  Have indices in Elasticsearch? This is the tool for you! Like a museum curator manages the exhibits and collections on display, Elasticsearch Curator helps you curate, or manage your indices.
ii  evebox                    1:0.11.1               amd64  no description given
ii  kibana                    7.9.1                  amd64  Explore and visualize your Elasticsearch data
ii  kibana-dashboards-stamus  2020042401             amd64  Kibana 6 dashboard templates.
ii  logstash                  1:7.9.1-1              all    An extensible logging pipeline
ii  moloch                    2.4.0-1                amd64  Moloch Full Packet System
ii  scirius                   3.5.0-3                amd64  Django application to manage Suricata ruleset
ii  suricata                  1:2020091301-0stamus0  amd64  Suricata open source multi-thread IDS/IPS/NSM system.

Filesystem     Type      Size  Used Avail Use% Mounted on
udev           devtmpfs  7.8G     0  7.8G   0% /dev
tmpfs          tmpfs     1.6G  8.9M  1.6G   1% /run
/dev/md0       ext4      887G   29G  814G   4% /
tmpfs          tmpfs     7.8G     0  7.8G   0% /dev/shm
tmpfs          tmpfs     5.0M     0  5.0M   0% /run/lock
tmpfs          tmpfs     7.8G     0  7.8G   0% /sys/fs/cgroup
tmpfs          tmpfs     1.6G     0  1.6G   0% /run/user/1001

michal25 commented 4 years ago

The Kibana state is the same:

Kibana server is not ready yet
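
When Kibana is stuck on "Kibana server is not ready yet" it is usually still waiting for Elasticsearch, or for a saved-objects migration of the .kibana index to finish. A quick way to see what it is waiting for - a sketch, assuming direct local access to Kibana on port 5601:

# Overall state plus per-plugin status (Kibana 7.x status API):
curl -s http://127.0.0.1:5601/api/status

# Migration and .kibana-index messages show up in the service log:
journalctl -u kibana -n 50 --no-pager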

pevma commented 4 years ago

Can you try the following:

?

michal25 commented 4 years ago

Done

Kibana seems to be working now.

Elasticsearch is in a yellow state. I cleared all data with the script selks-db-logs-cleanup_stamus.

Kibana is working, but Elasticsearch shows an empty window now.

I will wait a few hours and report the SELKS state here again.
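
For reference, a yellow cluster on a single-node SELKS box usually just means indices were created with replica shards that can never be assigned, because there is no second node to host them. A hedged sketch of how to confirm and clear that (the logstash-* pattern is taken from this setup):

# Check status and the number of unassigned shards:
curl -s 'http://127.0.0.1:9200/_cluster/health?pretty'

# List the indices that are yellow:
curl -s 'http://127.0.0.1:9200/_cat/indices?v&health=yellow'

# Drop replicas so the indices can go green on a single node:
curl -s -XPUT 'http://127.0.0.1:9200/logstash-*/_settings' \
  -H 'Content-Type: application/json' \
  -d '{"index": {"number_of_replicas": 0}}'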


michal25 commented 4 years ago

Now it looks like the problem has moved to the Logstash/Elasticsearch side, but Kibana works.

Screenshot_20200930_104341 Screenshot_20200930_104427 Screenshot_20200930_104525

pevma commented 4 years ago

That seems more localized. After you have had some traffic - can you open all dashboards in Kibana OK?

michal25 commented 4 years ago

It looks like some index problem, too. Screenshot_20200930_111936

pevma commented 4 years ago

For that part I think you need to rerun the Moloch setup - as you deleted all indexes/data.
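
For context: "rerun the Moloch setup" here amounts to re-initialising Moloch's own Elasticsearch indices, since they were wiped together with the rest of the data. A sketch, assuming the stock SELKS paths visible in the health check above (db.pl init recreates only Moloch's indices, not the logstash-* ones):

# Recreate Moloch's ES indices, then restart its services:
/data/moloch/db/db.pl 127.0.0.1:9200 init
systemctl restart molochpcapread-selks molochviewer-selks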

michal25 commented 4 years ago

selks-first-time-setup_stamus
selks-db-logs-cleanup_stamus

I can see the first Moloch records. Seems to be working.

michal25 commented 4 years ago

Kibana works, Moloch works, but Elasticsearch gives an empty screen when I start a search from Kibana.

Screenshot_20201001_124156 Screenshot_20201001_124225

How do I regenerate the Elasticsearch indexes?

Screenshot_20201001_124902

pevma commented 4 years ago

I think you can reset the dashboards now - that will regenerate the indexes. Or simply import the dashboards - like so - https://github.com/StamusNetworks/KTS7#how-to-use
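
If importing by hand rather than through Scirius, Kibana 7 exposes a saved-objects import API. A hedged sketch (the dashboards.ndjson filename is illustrative; the KTS7 README linked above has the actual files and steps):

curl -s -XPOST 'http://127.0.0.1:5601/api/saved_objects/_import?overwrite=true' \
  -H 'kbn-xsrf: true' \
  --form file=@dashboards.ndjson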

michal25 commented 4 years ago

Done, but no effect.

pevma commented 4 years ago

Then I think you need to check inside Kibana, under Management - whether you have set up a default index pattern (should be logstash-*)?
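
Besides the Kibana UI (Management > Index Patterns), the saved-objects API can list which index patterns exist; a sketch, assuming local access to Kibana on port 5601:

# List all index patterns Kibana knows about:
curl -s 'http://127.0.0.1:5601/api/saved_objects/_find?type=index-pattern&fields=title'

# The default pattern is stored in Kibana's advanced settings:
curl -s 'http://127.0.0.1:5601/api/kibana/settings'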

michal25 commented 4 years ago

It is set to logstash-*. Screenshot_20201001_141117

pevma commented 4 years ago

OK, so that part is good then. If you open the Kibana dashboard SN-ALERTS - does it populate OK?

michal25 commented 4 years ago

Yes, this dashboard looks good. Screenshot_20201001_161317

pevma commented 4 years ago

In that specific case I think you may refer here - https://github.com/StamusNetworks/SELKS/issues/255

michal25 commented 4 years ago

Well, https://github.com/StamusNetworks/SELKS/issues/255 does not work in my case.

I tried to update the SELKS device with selks-upgrade_stamus

and to repair the selks.conf file with https://github.com/StamusNetworks/SELKS/wiki/Kibana-did-not-load-properly

But the result is exactly the same as before the upgrade, i.e. https://github.com/StamusNetworks/SELKS/issues/251#issuecomment-702052403

What exactly do I have to do in https://github.com/StamusNetworks/SELKS/issues/255#issuecomment-698755156 ?

pevma commented 4 years ago

OK, just so I understand properly - all dashboards/visualizations open properly except in these two cases. How do you reproduce that? Can you share exactly the steps you take to get to the error page - so I can try to reproduce? Pretty please :)

michal25 commented 4 years ago

I'm afraid the only way to reproduce the problem is to update an earlier SELKS6 to a newer version and keep all the config files (that is, not update the config files). That means starting this task https://github.com/StamusNetworks/SELKS/issues/251 again. Is there some way to repair https://github.com/StamusNetworks/SELKS/issues/251#issuecomment-702052403 ?

I don't exactly understand what to do in the Suricata config in https://github.com/StamusNetworks/SELKS/issues/255#issuecomment-698755156

Maybe that is the way to solve the problem.

pevma commented 4 years ago

I updated - but can't seem to reproduce it. Exactly which old config files did you keep?

michal25 commented 4 years ago

I kept all the old config files.


michal25 commented 4 years ago

Today I updated one SELKS device and the result is exactly the same as

https://github.com/StamusNetworks/SELKS/issues/251#issuecomment-702720837

This time I updated all the config files too.

pevma commented 4 years ago

What is the current status - is it only the Scirius management pages that do not populate? Does Hunt populate?

michal25 commented 4 years ago

Hunt populates and is working. The problem is with Discover in Elastic:

Screenshot_20201015_132310 clipboard.txt

michal25 commented 4 years ago

I also copied the response to the clipboard; it is attached as clipboard.txt.

pevma commented 3 years ago

OK, having a look and will propose a way forward.

nicholaslaird commented 3 years ago

Might be unrelated, but I also have a blank white page for Kibana. It's a brand-new install from ISO. What seems to be happening is a 404 error in the web console where it can't find bootstrap.js.

Xnip2021-01-21_23-06-34

pevma commented 3 years ago

Did you go over the first-time setup wiki? I am thinking it might be related to - https://github.com/StamusNetworks/SELKS/wiki/First-time-setup#nginx-config
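
For context, the wiki fix replaces the SELKS nginx site configuration so that Kibana's assets (such as bootstrap.js) are proxied correctly instead of returning 404. The relevant stanza looks roughly like the sketch below; this is illustrative only, not the authoritative selks.conf - take the actual file from the wiki page:

# Hypothetical excerpt of the Kibana proxy stanza in the nginx site config:
location /kibana/ {
    proxy_pass http://127.0.0.1:5601/;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}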

nicholaslaird commented 3 years ago

> Did you go over the first-time setup wiki? I am thinking it might be related to - https://github.com/StamusNetworks/SELKS/wiki/First-time-setup#nginx-config

That took care of it for me. I missed that, simply assuming it was OK as I went through everything except clicking that link. 👍 Thanks.

pevma commented 3 years ago

Glad to hear it was fixed! This will be addressed in the next release upgrade too. Thank you.