StamusNetworks / SELKS

A Suricata based IDS/IPS/NSM distro
https://www.stamus-networks.com/open-source/#selks
GNU General Public License v3.0

No alerts activity at home dashboard #201

Closed VN1977 closed 4 years ago

VN1977 commented 4 years ago

Hello! We have a SELKS 5.0 installation that has been running for about 6 months. I recently moved it to faster storage using dd, and since then I have one problem. The main dashboard and the Suricata dashboard both have an "Alerts activity" graph, and from the moment I moved to the new storage that graph is empty. It looks as if no alerts and no problems were detected, yet at the same time I do see alerts in the Hunt interface and in Kibana. The problem affects only this graph. How can I resolve it?

pevma commented 4 years ago

When you changed the storage - how did that affect the installation/Elasticsearch?

VN1977 commented 4 years ago

There was no effect at all. I just started Elasticsearch and that's it. I changed storage with dd if=/dev/sda of=/dev/sdb and then booted the system from the new storage.
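
For reference, a common form of such a block-level clone (just the usual GNU dd flags, not taken from this thread) adds a larger block size, progress output and a final flush:

dd if=/dev/sda of=/dev/sdb bs=4M status=progress conv=fsync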

pevma commented 4 years ago

So the alerts actually appear in Kibana, and not in Scirius?

VN1977 commented 4 years ago

Yes, exactly.

pevma commented 4 years ago

You should make sure the Elasticsearch index is correctly named - below is an extract of /etc/scirius/local_settings.py from a SELKS installation for reference:


USE_ELASTICSEARCH = True
ELASTICSEARCH_ADDRESS = "localhost:9200"
ELASTICSEARCH_VERSION = 6
KIBANA_VERSION = 6
KIBANA_INDEX = ".kibana"
KIBANA_URL = "http://localhost:5601"
KIBANA6_DASHBOARDS_PATH = "/opt/selks/kibana6-dashboards/"
USE_KIBANA = True
KIBANA_PROXY = True

#SURICATA_UNIX_SOCKET = "/var/run/suricata/suricata-command.socket"

USE_EVEBOX = True
EVEBOX_ADDRESS = "localhost:5636"

USE_SURICATA_STATS = True
USE_LOGSTASH_STATS = True
STATIC_ROOT="/var/lib/scirius/static/"

DATABASES = {
  'default': {
     'ENGINE': 'django.db.backends.sqlite3',
     'NAME': os.path.join(BASE_DIR, 'db', 'db.sqlite3'),
  }
}
DBBACKUP_STORAGE_OPTIONS = {'location': '/var/backups/'}

ELASTICSEARCH_LOGSTASH_ALERT_INDEX="logstash-alert-"

SURICATA_NAME_IS_HOSTNAME = True

ALLOWED_HOSTS=["*"]
ELASTICSEARCH_KEYWORD = "keyword"
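
If in doubt, a quick cross-check that the indices Scirius expects (logstash-alert-*, per ELASTICSEARCH_LOGSTASH_ALERT_INDEX above) actually exist in Elasticsearch - assuming the default localhost:9200 endpoint - is:

curl -s 'http://localhost:9200/_cat/indices/logstash-alert-*?v'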

VN1977 commented 4 years ago

I have practically the same file.

> """
> Django settings for scirius project.
> 
> For more information on this file, see
> https://docs.djangoproject.com/en/1.6/topics/settings/
> 
> For the full list of settings and their values, see
> https://docs.djangoproject.com/en/1.6/ref/settings/
> """
> import os
> BASE_DIR = "/var/lib/scirius/"
> GIT_SOURCES_BASE_DIRECTORY = os.path.join(BASE_DIR, 'git-sources/')
> SECRET_KEY = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
> USE_ELASTICSEARCH = True
> ELASTICSEARCH_ADDRESS = "localhost:9200"
> ELASTICSEARCH_VERSION = 6
> KIBANA_VERSION = 6
> KIBANA_INDEX = ".kibana"
> KIBANA_URL = "http://localhost:5601"
> KIBANA6_DASHBOARDS_PATH = "/opt/selks/kibana6-dashboards/"
> USE_KIBANA = True
> KIBANA_PROXY = True
> 
> #SURICATA_UNIX_SOCKET = "/var/run/suricata/suricata-command.socket"
> 
> USE_EVEBOX = True
> EVEBOX_ADDRESS = "localhost:5636"
> 
> USE_SURICATA_STATS = True
> USE_LOGSTASH_STATS = True
> STATIC_ROOT="/var/lib/scirius/static/"
> 
> DATABASES = {
>   'default': {
>      'ENGINE': 'django.db.backends.sqlite3',
>      'NAME': os.path.join(BASE_DIR, 'db', 'db.sqlite3'),
>   }
> }
> DBBACKUP_STORAGE_OPTIONS = {'location': '/var/backups/'}
> 
> ELASTICSEARCH_LOGSTASH_ALERT_INDEX="logstash-alert-"
> 
> SURICATA_NAME_IS_HOSTNAME = True
> 
> ALLOWED_HOSTS=["*"]
> ELASTICSEARCH_KEYWORD = "keyword"

And which Elasticsearch index did you mean?

VN1977 commented 4 years ago

That's what I see: [Screenshot]

pevma commented 4 years ago

What is the hostname?

VN1977 commented 4 years ago

The hostname is ol-ms-sr-rm0143 and I didn't change it.

pevma commented 4 years ago

Are Suricata and the rest of the services running? If you execute selks-health-check_stamus, are all services running OK? (Please feel free to post the output here.)

VN1977 commented 4 years ago

Output of selks-health-check_stamus

● suricata.service - LSB: Next Generation IDS/IPS
   Loaded: loaded (/etc/init.d/suricata; generated; vendor preset: enabled)
   Active: active (running) since Wed 2019-10-09 02:01:25 MSK; 13h ago
     Docs: man:systemd-sysv-generator(8)
  Process: 25844 ExecStop=/etc/init.d/suricata stop (code=exited, status=0/SUCCESS)
  Process: 26026 ExecStart=/etc/init.d/suricata start (code=exited, status=0/SUCCESS)
    Tasks: 54 (limit: 4915)
   CGroup: /system.slice/suricata.service
           └─26034 /usr/bin/suricata -c /etc/suricata/suricata.yaml --pidfile /var/run/suricata.pid --af-packet -D -v --user=logstash

Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.
● elasticsearch.service - Elasticsearch
   Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2019-10-03 09:45:00 MSK; 6 days ago
     Docs: http://www.elastic.co
 Main PID: 1627 (java)
    Tasks: 338 (limit: 4915)
   CGroup: /system.slice/elasticsearch.service
           ├─1627 /usr/bin/java -Xms32g -Xmx32g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Des.networkaddress.cache.ttl=60 -Des.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.io.tmpdir=/tmp/elasticsearch-4820322780268184563 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/elasticsearch -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -Xloggc:/var/log/elasticsearch/gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=32 -XX:GCLogFileSize=64m -Des.path.home=/usr/share/elasticsearch -Des.path.conf=/etc/elasticsearch -Des.distribution.flavor=default -Des.distribution.type=deb -cp /usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch -p /var/run/elasticsearch/elasticsearch.pid --quiet
           └─2014 /usr/share/elasticsearch/modules/x-pack-ml/platform/linux-x86_64/bin/controller

Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.
● logstash.service - logstash
   Loaded: loaded (/etc/systemd/system/logstash.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2019-10-03 09:44:57 MSK; 6 days ago
 Main PID: 1219 (java)
    Tasks: 84 (limit: 4915)
   CGroup: /system.slice/logstash.service
           └─1219 /usr/bin/java -Xms4g -Xmx4g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djruby.compile.invokedynamic=true -Djruby.jit.threshold=0 -XX:+HeapDumpOnOutOfMemoryError -Djava.security.egd=file:/dev/urandom -cp /usr/share/logstash/logstash-core/lib/jars/animal-sniffer-annotations-1.14.jar:/usr/share/logstash/logstash-core/lib/jars/commons-codec-1.11.jar:/usr/share/logstash/logstash-core/lib/jars/commons-compiler-3.0.8.jar:/usr/share/logstash/logstash-core/lib/jars/error_prone_annotations-2.0.18.jar:/usr/share/logstash/logstash-core/lib/jars/google-java-format-1.1.jar:/usr/share/logstash/logstash-core/lib/jars/gradle-license-report-0.7.1.jar:/usr/share/logstash/logstash-core/lib/jars/guava-22.0.jar:/usr/share/logstash/logstash-core/lib/jars/j2objc-annotations-1.1.jar:/usr/share/logstash/logstash-core/lib/jars/jackson-annotations-2.9.8.jar:/usr/share/logstash/logstash-core/lib/jars/jackson-core-2.9.8.jar:/usr/share/logstash/logstash-core/lib/jars/jackson-databind-2.9.8.jar:/usr/share/logstash/logstash-core/lib/jars/jackson-dataformat-cbor-2.9.8.jar:/usr/share/logstash/logstash-core/lib/jars/janino-3.0.8.jar:/usr/share/logstash/logstash-core/lib/jars/javassist-3.22.0-GA.jar:/usr/share/logstash/logstash-core/lib/jars/jruby-complete-9.2.7.0.jar:/usr/share/logstash/logstash-core/lib/jars/jsr305-1.3.9.jar:/usr/share/logstash/logstash-core/lib/jars/log4j-api-2.9.1.jar:/usr/share/logstash/logstash-core/lib/jars/log4j-core-2.9.1.jar:/usr/share/logstash/logstash-core/lib/jars/log4j-slf4j-impl-2.9.1.jar:/usr/share/logstash/logstash-core/lib/jars/logstash-core.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.commands-3.6.0.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.contenttype-3.4.100.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.expressions-3.4.300.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.filesystem-1.3.100.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.jobs-3.5.100.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.resources-3.7.100.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.runtime-3.7.0.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.equinox.app-1.3.100.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.equinox.common-3.6.0.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.equinox.preferences-3.4.1.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.equinox.registry-3.5.101.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.jdt.core-3.10.0.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.osgi-3.7.1.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.text-3.5.101.jar:/usr/share/logstash/logstash-core/lib/jars/slf4j-api-1.7.25.jar org.logstash.Logstash --path.settings /etc/logstash

Oct 09 15:27:52 ol-ms-sr-rm0143 logstash[1219]: [2019-10-09T15:27:52,501][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 429 ({"type"=>"es_rejected_execution_exception", "reason"=>"rejected execution of processing of [201361896][indices:data/write/bulk[s][p]]: request: BulkShardRequest [[logstash-flow-2019.10.09][2]] containing [10] requests, target allocation id: uJTLjTJzQdG_18z0cFiK9Q, primary term: 1 on EsThreadPoolExecutor[name = jQIX_2X/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@60ec186b[Running, pool size = 24, active threads = 24, queued tasks = 200, completed tasks = 193598767]]"})
Oct 09 15:27:52 ol-ms-sr-rm0143 logstash[1219]: [2019-10-09T15:27:52,501][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 429 ({"type"=>"es_rejected_execution_exception", "reason"=>"rejected execution of processing of [201361902][indices:data/write/bulk[s][p]]: request: BulkShardRequest [[logstash-flow-2019.10.09][0]] containing [19] requests, target allocation id: Ib-u_OvbRTeqawbyMp9fFA, primary term: 1 on EsThreadPoolExecutor[name = jQIX_2X/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@60ec186b[Running, pool size = 24, active threads = 24, queued tasks = 200, completed tasks = 193598767]]"})
Oct 09 15:27:52 ol-ms-sr-rm0143 logstash[1219]: [2019-10-09T15:27:52,501][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 429 ({"type"=>"es_rejected_execution_exception", "reason"=>"rejected execution of processing of [201361905][indices:data/write/bulk[s][p]]: request: BulkShardRequest [[logstash-flow-2019.10.09][3]] containing [16] requests, target allocation id: KRnqDiTeShqK0NPLhu-W9Q, primary term: 1 on EsThreadPoolExecutor[name = jQIX_2X/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@60ec186b[Running, pool size = 24, active threads = 24, queued tasks = 200, completed tasks = 193598767]]"})
Oct 09 15:27:52 ol-ms-sr-rm0143 logstash[1219]: [2019-10-09T15:27:52,501][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 429 ({"type"=>"es_rejected_execution_exception", "reason"=>"rejected execution of processing of [201361896][indices:data/write/bulk[s][p]]: request: BulkShardRequest [[logstash-flow-2019.10.09][2]] containing [10] requests, target allocation id: uJTLjTJzQdG_18z0cFiK9Q, primary term: 1 on EsThreadPoolExecutor[name = jQIX_2X/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@60ec186b[Running, pool size = 24, active threads = 24, queued tasks = 200, completed tasks = 193598767]]"})
Oct 09 15:27:52 ol-ms-sr-rm0143 logstash[1219]: [2019-10-09T15:27:52,502][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 429 ({"type"=>"es_rejected_execution_exception", "reason"=>"rejected execution of processing of [201361896][indices:data/write/bulk[s][p]]: request: BulkShardRequest [[logstash-flow-2019.10.09][2]] containing [10] requests, target allocation id: uJTLjTJzQdG_18z0cFiK9Q, primary term: 1 on EsThreadPoolExecutor[name = jQIX_2X/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@60ec186b[Running, pool size = 24, active threads = 24, queued tasks = 200, completed tasks = 193598767]]"})
Oct 09 15:27:52 ol-ms-sr-rm0143 logstash[1219]: [2019-10-09T15:27:52,502][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 429 ({"type"=>"es_rejected_execution_exception", "reason"=>"rejected execution of processing of [201361905][indices:data/write/bulk[s][p]]: request: BulkShardRequest [[logstash-flow-2019.10.09][3]] containing [16] requests, target allocation id: KRnqDiTeShqK0NPLhu-W9Q, primary term: 1 on EsThreadPoolExecutor[name = jQIX_2X/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@60ec186b[Running, pool size = 24, active threads = 24, queued tasks = 200, completed tasks = 193598767]]"})
Oct 09 15:27:52 ol-ms-sr-rm0143 logstash[1219]: [2019-10-09T15:27:52,502][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 429 ({"type"=>"es_rejected_execution_exception", "reason"=>"rejected execution of processing of [201361907][indices:data/write/bulk[s][p]]: request: BulkShardRequest [[logstash-flow-2019.10.09][4]] containing [17] requests, target allocation id: EcM2f2nsQXqyv9wu8WWyOg, primary term: 1 on EsThreadPoolExecutor[name = jQIX_2X/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@60ec186b[Running, pool size = 24, active threads = 24, queued tasks = 200, completed tasks = 193598767]]"})
Oct 09 15:27:52 ol-ms-sr-rm0143 logstash[1219]: [2019-10-09T15:27:52,502][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 429 ({"type"=>"es_rejected_execution_exception", "reason"=>"rejected execution of processing of [201361896][indices:data/write/bulk[s][p]]: request: BulkShardRequest [[logstash-flow-2019.10.09][2]] containing [10] requests, target allocation id: uJTLjTJzQdG_18z0cFiK9Q, primary term: 1 on EsThreadPoolExecutor[name = jQIX_2X/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@60ec186b[Running, pool size = 24, active threads = 24, queued tasks = 200, completed tasks = 193598767]]"})
Oct 09 15:27:52 ol-ms-sr-rm0143 logstash[1219]: [2019-10-09T15:27:52,502][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 429 ({"type"=>"es_rejected_execution_exception", "reason"=>"rejected execution of processing of [201361902][indices:data/write/bulk[s][p]]: request: BulkShardRequest [[logstash-flow-2019.10.09][0]] containing [19] requests, target allocation id: Ib-u_OvbRTeqawbyMp9fFA, primary term: 1 on EsThreadPoolExecutor[name = jQIX_2X/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@60ec186b[Running, pool size = 24, active threads = 24, queued tasks = 200, completed tasks = 193598767]]"})
Oct 09 15:27:52 ol-ms-sr-rm0143 logstash[1219]: [2019-10-09T15:27:52,502][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 429 ({"type"=>"es_rejected_execution_exception", "reason"=>"rejected execution of processing of [201361896][indices:data/write/bulk[s][p]]: request: BulkShardRequest [[logstash-flow-2019.10.09][2]] containing [10] requests, target allocation id: uJTLjTJzQdG_18z0cFiK9Q, primary term: 1 on EsThreadPoolExecutor[name = jQIX_2X/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@60ec186b[Running, pool size = 24, active threads = 24, queued tasks = 200, completed tasks = 193598767]]"})
● kibana.service - Kibana
   Loaded: loaded (/etc/systemd/system/kibana.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2019-10-03 10:11:19 MSK; 6 days ago
 Main PID: 3493 (node)
    Tasks: 11 (limit: 4915)
   CGroup: /system.slice/kibana.service
           └─3493 /usr/share/kibana/bin/../node/bin/node --no-warnings --max-http-header-size=65536 /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana.yml

Oct 09 15:26:29 ol-ms-sr-rm0143 kibana[3493]: {"type":"response","@timestamp":"2019-10-09T12:26:29Z","tags":[],"pid":3493,"method":"post","statusCode":200,"req":{"url":"/api/monitoring/v1/clusters/VAB3cjrzTbCgcW6O49WWzQ","method":"post","headers":{"host":"localhost:5601","accept-encoding":"identity","content-length":"81","accept-language":"ru-RU,ru;q=0.8,en-US;q=0.5,en;q=0.3","kbn-version":"6.8.1","accept":"application/json, text/plain, */*","user-agent":"Mozilla/5.0 (Windows NT 6.3; WOW64; rv:68.0) Gecko/20100101 Firefox/68.0","connection":"close","x-forwarded-proto":"https","referer":"https://ids01.corp.lan/app/monitoring","content-type":"application/json;charset=utf-8"},"remoteAddress":"127.0.0.1","userAgent":"127.0.0.1","referer":"https://ids01.corp.lan/app/monitoring"},"res":{"statusCode":200,"responseTime":21,"contentLength":9},"message":"POST /api/monitoring/v1/clusters/VAB3cjrzTbCgcW6O49WWzQ 200 21ms - 9.0B"}
Oct 09 15:26:39 ol-ms-sr-rm0143 kibana[3493]: {"type":"response","@timestamp":"2019-10-09T12:26:39Z","tags":[],"pid":3493,"method":"post","statusCode":200,"req":{"url":"/api/monitoring/v1/clusters/VAB3cjrzTbCgcW6O49WWzQ","method":"post","headers":{"host":"localhost:5601","accept-encoding":"identity","content-length":"81","accept-language":"ru-RU,ru;q=0.8,en-US;q=0.5,en;q=0.3","kbn-version":"6.8.1","accept":"application/json, text/plain, */*","user-agent":"Mozilla/5.0 (Windows NT 6.3; WOW64; rv:68.0) Gecko/20100101 Firefox/68.0","connection":"close","x-forwarded-proto":"https","referer":"https://ids01.corp.lan/app/monitoring","content-type":"application/json;charset=utf-8"},"remoteAddress":"127.0.0.1","userAgent":"127.0.0.1","referer":"https://ids01.corp.lan/app/monitoring"},"res":{"statusCode":200,"responseTime":24,"contentLength":9},"message":"POST /api/monitoring/v1/clusters/VAB3cjrzTbCgcW6O49WWzQ 200 24ms - 9.0B"}
Oct 09 15:26:49 ol-ms-sr-rm0143 kibana[3493]: {"type":"response","@timestamp":"2019-10-09T12:26:49Z","tags":[],"pid":3493,"method":"post","statusCode":200,"req":{"url":"/api/monitoring/v1/clusters/VAB3cjrzTbCgcW6O49WWzQ","method":"post","headers":{"host":"localhost:5601","accept-encoding":"identity","content-length":"81","accept-language":"ru-RU,ru;q=0.8,en-US;q=0.5,en;q=0.3","kbn-version":"6.8.1","accept":"application/json, text/plain, */*","user-agent":"Mozilla/5.0 (Windows NT 6.3; WOW64; rv:68.0) Gecko/20100101 Firefox/68.0","connection":"close","x-forwarded-proto":"https","referer":"https://ids01.corp.lan/app/monitoring","content-type":"application/json;charset=utf-8"},"remoteAddress":"127.0.0.1","userAgent":"127.0.0.1","referer":"https://ids01.corp.lan/app/monitoring"},"res":{"statusCode":200,"responseTime":26,"contentLength":9},"message":"POST /api/monitoring/v1/clusters/VAB3cjrzTbCgcW6O49WWzQ 200 26ms - 9.0B"}
Oct 09 15:26:59 ol-ms-sr-rm0143 kibana[3493]: {"type":"response","@timestamp":"2019-10-09T12:26:59Z","tags":[],"pid":3493,"method":"post","statusCode":200,"req":{"url":"/api/monitoring/v1/clusters/VAB3cjrzTbCgcW6O49WWzQ","method":"post","headers":{"host":"localhost:5601","accept-encoding":"identity","content-length":"81","accept-language":"ru-RU,ru;q=0.8,en-US;q=0.5,en;q=0.3","kbn-version":"6.8.1","accept":"application/json, text/plain, */*","user-agent":"Mozilla/5.0 (Windows NT 6.3; WOW64; rv:68.0) Gecko/20100101 Firefox/68.0","connection":"close","x-forwarded-proto":"https","referer":"https://ids01.corp.lan/app/monitoring","content-type":"application/json;charset=utf-8"},"remoteAddress":"127.0.0.1","userAgent":"127.0.0.1","referer":"https://ids01.corp.lan/app/monitoring"},"res":{"statusCode":200,"responseTime":24,"contentLength":9},"message":"POST /api/monitoring/v1/clusters/VAB3cjrzTbCgcW6O49WWzQ 200 24ms - 9.0B"}
Oct 09 15:27:09 ol-ms-sr-rm0143 kibana[3493]: {"type":"response","@timestamp":"2019-10-09T12:27:09Z","tags":[],"pid":3493,"method":"post","statusCode":200,"req":{"url":"/api/monitoring/v1/clusters/VAB3cjrzTbCgcW6O49WWzQ","method":"post","headers":{"host":"localhost:5601","accept-encoding":"identity","content-length":"81","accept-language":"ru-RU,ru;q=0.8,en-US;q=0.5,en;q=0.3","kbn-version":"6.8.1","accept":"application/json, text/plain, */*","user-agent":"Mozilla/5.0 (Windows NT 6.3; WOW64; rv:68.0) Gecko/20100101 Firefox/68.0","connection":"close","x-forwarded-proto":"https","referer":"https://ids01.corp.lan/app/monitoring","content-type":"application/json;charset=utf-8"},"remoteAddress":"127.0.0.1","userAgent":"127.0.0.1","referer":"https://ids01.corp.lan/app/monitoring"},"res":{"statusCode":200,"responseTime":39,"contentLength":9},"message":"POST /api/monitoring/v1/clusters/VAB3cjrzTbCgcW6O49WWzQ 200 39ms - 9.0B"}
Oct 09 15:27:19 ol-ms-sr-rm0143 kibana[3493]: {"type":"response","@timestamp":"2019-10-09T12:27:19Z","tags":[],"pid":3493,"method":"post","statusCode":200,"req":{"url":"/api/monitoring/v1/clusters/VAB3cjrzTbCgcW6O49WWzQ","method":"post","headers":{"host":"localhost:5601","accept-encoding":"identity","content-length":"81","accept-language":"ru-RU,ru;q=0.8,en-US;q=0.5,en;q=0.3","kbn-version":"6.8.1","accept":"application/json, text/plain, */*","user-agent":"Mozilla/5.0 (Windows NT 6.3; WOW64; rv:68.0) Gecko/20100101 Firefox/68.0","connection":"close","x-forwarded-proto":"https","referer":"https://ids01.corp.lan/app/monitoring","content-type":"application/json;charset=utf-8"},"remoteAddress":"127.0.0.1","userAgent":"127.0.0.1","referer":"https://ids01.corp.lan/app/monitoring"},"res":{"statusCode":200,"responseTime":22,"contentLength":9},"message":"POST /api/monitoring/v1/clusters/VAB3cjrzTbCgcW6O49WWzQ 200 22ms - 9.0B"}
Oct 09 15:27:29 ol-ms-sr-rm0143 kibana[3493]: {"type":"response","@timestamp":"2019-10-09T12:27:29Z","tags":[],"pid":3493,"method":"post","statusCode":200,"req":{"url":"/api/monitoring/v1/clusters/VAB3cjrzTbCgcW6O49WWzQ","method":"post","headers":{"host":"localhost:5601","accept-encoding":"identity","content-length":"81","accept-language":"ru-RU,ru;q=0.8,en-US;q=0.5,en;q=0.3","kbn-version":"6.8.1","accept":"application/json, text/plain, */*","user-agent":"Mozilla/5.0 (Windows NT 6.3; WOW64; rv:68.0) Gecko/20100101 Firefox/68.0","connection":"close","x-forwarded-proto":"https","referer":"https://ids01.corp.lan/app/monitoring","content-type":"application/json;charset=utf-8"},"remoteAddress":"127.0.0.1","userAgent":"127.0.0.1","referer":"https://ids01.corp.lan/app/monitoring"},"res":{"statusCode":200,"responseTime":44,"contentLength":9},"message":"POST /api/monitoring/v1/clusters/VAB3cjrzTbCgcW6O49WWzQ 200 44ms - 9.0B"}
Oct 09 15:27:39 ol-ms-sr-rm0143 kibana[3493]: {"type":"response","@timestamp":"2019-10-09T12:27:39Z","tags":[],"pid":3493,"method":"post","statusCode":200,"req":{"url":"/api/monitoring/v1/clusters/VAB3cjrzTbCgcW6O49WWzQ","method":"post","headers":{"host":"localhost:5601","accept-encoding":"identity","content-length":"81","accept-language":"ru-RU,ru;q=0.8,en-US;q=0.5,en;q=0.3","kbn-version":"6.8.1","accept":"application/json, text/plain, */*","user-agent":"Mozilla/5.0 (Windows NT 6.3; WOW64; rv:68.0) Gecko/20100101 Firefox/68.0","connection":"close","x-forwarded-proto":"https","referer":"https://ids01.corp.lan/app/monitoring","content-type":"application/json;charset=utf-8"},"remoteAddress":"127.0.0.1","userAgent":"127.0.0.1","referer":"https://ids01.corp.lan/app/monitoring"},"res":{"statusCode":200,"responseTime":23,"contentLength":9},"message":"POST /api/monitoring/v1/clusters/VAB3cjrzTbCgcW6O49WWzQ 200 23ms - 9.0B"}
Oct 09 15:27:49 ol-ms-sr-rm0143 kibana[3493]: {"type":"response","@timestamp":"2019-10-09T12:27:49Z","tags":[],"pid":3493,"method":"post","statusCode":200,"req":{"url":"/api/monitoring/v1/clusters/VAB3cjrzTbCgcW6O49WWzQ","method":"post","headers":{"host":"localhost:5601","accept-encoding":"identity","content-length":"81","accept-language":"ru-RU,ru;q=0.8,en-US;q=0.5,en;q=0.3","kbn-version":"6.8.1","accept":"application/json, text/plain, */*","user-agent":"Mozilla/5.0 (Windows NT 6.3; WOW64; rv:68.0) Gecko/20100101 Firefox/68.0","connection":"close","x-forwarded-proto":"https","referer":"https://ids01.corp.lan/app/monitoring","content-type":"application/json;charset=utf-8"},"remoteAddress":"127.0.0.1","userAgent":"127.0.0.1","referer":"https://ids01.corp.lan/app/monitoring"},"res":{"statusCode":200,"responseTime":22,"contentLength":9},"message":"POST /api/monitoring/v1/clusters/VAB3cjrzTbCgcW6O49WWzQ 200 22ms - 9.0B"}
Oct 09 15:27:59 ol-ms-sr-rm0143 kibana[3493]: {"type":"response","@timestamp":"2019-10-09T12:27:59Z","tags":[],"pid":3493,"method":"post","statusCode":200,"req":{"url":"/api/monitoring/v1/clusters/VAB3cjrzTbCgcW6O49WWzQ","method":"post","headers":{"host":"localhost:5601","accept-encoding":"identity","content-length":"81","accept-language":"ru-RU,ru;q=0.8,en-US;q=0.5,en;q=0.3","kbn-version":"6.8.1","accept":"application/json, text/plain, */*","user-agent":"Mozilla/5.0 (Windows NT 6.3; WOW64; rv:68.0) Gecko/20100101 Firefox/68.0","connection":"close","x-forwarded-proto":"https","referer":"https://ids01.corp.lan/app/monitoring","content-type":"application/json;charset=utf-8"},"remoteAddress":"127.0.0.1","userAgent":"127.0.0.1","referer":"https://ids01.corp.lan/app/monitoring"},"res":{"statusCode":200,"responseTime":24,"contentLength":9},"message":"POST /api/monitoring/v1/clusters/VAB3cjrzTbCgcW6O49WWzQ 200 24ms - 9.0B"}
● evebox.service - EveBox Server
   Loaded: loaded (/lib/systemd/system/evebox.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2019-10-03 09:44:58 MSK; 6 days ago
 Main PID: 1258 (evebox)
    Tasks: 52 (limit: 4915)
   CGroup: /system.slice/evebox.service
           └─1258 /usr/bin/evebox server

Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.
● molochviewer-selks.service - Moloch Viewer
   Loaded: loaded (/etc/systemd/system/molochviewer-selks.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2019-10-03 09:46:36 MSK; 6 days ago
 Main PID: 2360 (sh)
    Tasks: 11 (limit: 4915)
   CGroup: /system.slice/molochviewer-selks.service
           ├─2360 /bin/sh -c /data/moloch/bin/node viewer.js -c /data/moloch/etc/config.ini >> /data/moloch/logs/viewer.log 2>&1
           └─2361 /data/moloch/bin/node viewer.js -c /data/moloch/etc/config.ini

Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.
● molochpcapread-selks.service - Moloch Pcap Read
   Loaded: loaded (/etc/systemd/system/molochpcapread-selks.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2019-10-03 09:46:28 MSK; 6 days ago
 Main PID: 2300 (sh)
    Tasks: 6 (limit: 4915)
   CGroup: /system.slice/molochpcapread-selks.service
           ├─2300 /bin/sh -c /data/moloch/bin/moloch-capture -c /data/moloch/etc/config.ini -m --copy --delete -R /data/nsm/  >> /data/moloch/logs/capture.log 2>&1
           └─2301 /data/moloch/bin/moloch-capture -c /data/moloch/etc/config.ini -m --copy --delete -R /data/nsm/

Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.
scirius                          RUNNING   pid 30725, uptime 2 days, 3:43:34
ii  elasticsearch                      6.8.1                          all          Elasticsearch is a distributed RESTful search engine built for the cloud. Reference documentation can be found at https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html and the 'Elasticsearch: The Definitive Guide' book can be found at https://www.elastic.co/guide/en/elasticsearch/guide/current/index.html
ii  elasticsearch-curator              5.7.6                          amd64        Have indices in Elasticsearch? This is the tool for you!\n\nLike a museum curator manages the exhibits and collections on display, \nElasticsearch Curator helps you curate, or manage your indices.
ii  evebox                             1:0.10.2                       amd64        no description given
ii  kibana                             6.8.1                          amd64        Explore and visualize your Elasticsearch data
ii  kibana-dashboards-stamus           2019030501                     amd64        Kibana 6 dashboard templates.
ii  logstash                           1:6.8.1-1                      all          An extensible logging pipeline
ii  moloch                             1.8.0-1                        amd64        Moloch Full Packet System
ii  scirius                            3.2.0-1                        amd64        Django application to manage Suricata ruleset
ii  suricata                           2019060301-0stamus0            amd64        Suricata open source multi-thread IDS/IPS/NSM system.
Filesystem     Type      Size  Used Avail Use% Mounted on
udev           devtmpfs   32G     0   32G   0% /dev
tmpfs          tmpfs     6.3G  685M  5.7G  11% /run
/dev/sda2      ext4       32G  874M   30G   3% /
/dev/sda8      ext4      9.2G  2.8G  5.9G  32% /usr
tmpfs          tmpfs      32G     0   32G   0% /dev/shm
tmpfs          tmpfs     5.0M     0  5.0M   0% /run/lock
tmpfs          tmpfs      32G     0   32G   0% /sys/fs/cgroup
/dev/sda9      ext4      9.2G   50M  8.6G   1% /tmp
/dev/sda5      ext4       11G   63M  9.5G   1% /boot
/dev/sda10     ext4      7.1T  3.4T  3.5T  50% /var
/dev/sda7      ext4      465M  2.3M  434M   1% /var/tmp
tmpfs          tmpfs     6.3G     0  6.3G   0% /run/user/1196001270

selks-health-check_stamus.txt

VN1977 commented 4 years ago

Maybe this helps... Since the day I migrated the storage, the documents in Elasticsearch have a different value in the "host" field (see screenshots). The output of hostname and hostname -f differs: before the migration the documents contained the short name, and afterwards it changed to the FQDN. I have another SELKS installation where hostname and hostname -f differ in the same way, but all documents there still have the short name in the host field. So what can I correct in the config files to solve this? [Screenshot 1] [Screenshot 2] [Screenshot 3]
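
One way to see exactly which host values the alert indices contain, compared with what the OS reports (a sketch assuming the default localhost:9200 endpoint and that the field has a keyword sub-field, as in the stock Logstash template), is a terms aggregation:

hostname      # short name, e.g. ol-ms-sr-rm0143
hostname -f   # FQDN, e.g. ol-ms-sr-rm0143.corp.lan
curl -s -H 'Content-Type: application/json' \
  'http://localhost:9200/logstash-alert-*/_search?size=0' \
  -d '{"aggs":{"hosts":{"terms":{"field":"host.keyword"}}}}'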

pevma commented 4 years ago

Can you try changing

SURICATA_NAME_IS_HOSTNAME = False

then restart all services and see if the issue persists?
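
For completeness, a minimal sketch of applying that change, assuming the stock SELKS layout where Scirius runs under supervisor (as the health-check output above suggests):

# edit /etc/scirius/local_settings.py and set SURICATA_NAME_IS_HOSTNAME = False, then:
supervisorctl restart scirius
systemctl restart suricata logstash kibana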

VN1977 commented 4 years ago

I did it and restarted the whole server. After that I don't see any graphs at all. [Screenshot 1] [Screenshot 2]

pevma commented 4 years ago

And when you set it to True - which graphs do work?

VN1977 commented 4 years ago

I have set SURICATA_NAME_IS_HOSTNAME back to True, and I still don't see any columns in the graph since the storage migration.

VN1977 commented 4 years ago

[image]

VN1977 commented 4 years ago

I'm sure the problem is connected with the host field. On one installation I have the FQDN and on the other one the short name. [image] [image] How can I correct the problem on the first installation? What component sends this data to Elasticsearch?

pevma commented 4 years ago

Do you have multiple hosts sending? It seems you have two different ones - ol-ms-sr-rm0143, and the other one ends in rm0120 - is that expected?

VN1977 commented 4 years ago

I have two installations, both installed from your ISO. The name of the first one ends in 0143 and the second in 0120. I posted the previous screenshots just to show that by default, after installation, the hostname is the short name. But my problem host has the full name in Elasticsearch, and something is writing that value into the indices. I have explained this twice already - do you see the difference? Or maybe it doesn't matter, and the host field can hold either the short or the full name?

pevma commented 4 years ago

Nope, sorry, I missed that. And now I am probably misunderstanding even more...

So the problematic host is ....corp.lan, right? If that is the problematic host and the only thing you have changed is https://github.com/StamusNetworks/SELKS/issues/201#issuecomment-538388265, how did the hostname change so that ...corp.lan got appended to it?

VN1977 commented 4 years ago

The problematic host is ol-ms-sr-rm0143. I don't know how the host field in Elasticsearch changed, which is why I'm asking for help. I didn't change the OS hostname. Which component sends the data to the index - Suricata through Logstash, or something else? I don't see any hostname info in eve.json.

pevma commented 4 years ago

Logstash reads (eve.json) and then ships/inserts to ES
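
As far as I know the host field is not present in eve.json at all - it is typically added by Logstash itself when the event is read, using the hostname the Logstash process resolves at startup, which is why a change in name resolution (/etc/hosts, DNS) after the disk move can flip it between the short name and the FQDN. A quick way to check whether the shipped pipeline also sets or rewrites that field (assuming the usual Debian layout under /etc/logstash/conf.d/) is:

grep -rn "host" /etc/logstash/conf.d/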

VN1977 commented 4 years ago

But there is no hostname info in eve.json. At what point does the hostname get added in Logstash?

VN1977 commented 4 years ago

I found out that if I set the host field value to ol-ms-sr-rm0143, the columns in the graphs appear. So I need to understand why the full name is being sent to Elasticsearch.

VN1977 commented 4 years ago

I solved my problem by adding a mutate filter in Logstash and replacing the FQDN with the short hostname. I don't know why, but the FQDN also appears in @metadata.
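
A minimal sketch of such a mutate filter (Logstash config syntax; the gsub just strips everything from the first dot onward, leaving the short hostname - a hard-coded replace of the field would work as well):

filter {
  mutate {
    # e.g. ol-ms-sr-rm0143.corp.lan -> ol-ms-sr-rm0143
    gsub => [ "host", "[.].*$", "" ]
  }
}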

pevma commented 4 years ago

I am also not sure why just upgrading the disk would result in a different hostname. Thank you for posting your solution here!