Hi @slayerjk
I see in your comment you have indices of Wazuh alerts for the recent days.
Could you be more precise about what your problem is? The title of the issue mentions File Integrity Monitoring, which is a module of Wazuh, but in your comment you talk about server events and no agent info. What do you mean by agent info? Are they Wazuh alerts?
A question about your environment: did you install the opendistroforelasticsearch-1.13.2-1.x86_64 and elasticsearch-oss-7.10.2-1.x86_64 packages? How did you install the Elasticsearch component? Did you install Elasticsearch OSS plus some standalone Open Distro for Elasticsearch plugins, or did you install the package offered by Open Distro for Elasticsearch that contains Elasticsearch OSS + the Open Distro for Elasticsearch plugins? I want to be sure you have only one Elasticsearch.

Checks you could do:
1. Agent: is connected to a manager (master or worker/s)
2. Manager:
2.1. There are alerts in the /var/ossec/logs/alerts/alerts.json file for the agent
2.2. Filebeat is installed in the manager and configured to work with Wazuh
2.3. Filebeat is running:
# depending on your service manager
systemctl status filebeat
# or
service filebeat status
2.4. Filebeat is connected correctly to Elasticsearch:
filebeat test output
2.5. Review if there are errors or warnings in Filebeat:
cat /var/log/filebeat/filebeat | grep -i -E "err|warn"
2.6. Review if there are errors or warnings in Elasticsearch:
cat /var/log/elasticsearch/<CLUSTER_NAME>.log | grep -i -E "err|warn"
replacing <CLUSTER_NAME> with the name of your Elasticsearch cluster. By default, it could be elasticsearch.
Could you go to the Discover plugin of Kibana using the index pattern of Wazuh alerts and check if there are alerts related to the agent you would expect to see in the Wazuh app for Kibana?

Hi, Desvelao.
Thanks for the answer; I'm not ready to reply yet, I'll give more details on Monday.
Hi, Desvelao.
Could you be more precise about what your problem is? The title of the issue mentions File Integrity Monitoring, which is a module of Wazuh, but in your comment you talk about server events and no agent info. What do you mean by agent info? Are they Wazuh alerts?
I mean that I have no alerts from my agents, neither those assigned to the master nor those assigned to the worker. In Kibana's UI I have neither Dashboard data nor Events data for any of my agents.
Did you install the opendistroforelasticsearch-1.13.2-1.x86_64 and elasticsearch-oss-7.10.2-1.x86_64 packages? How did you install the Elasticsearch component? Did you install Elasticsearch OSS plus some standalone Open Distro for Elasticsearch plugins, or did you install the package offered by Open Distro for Elasticsearch that contains Elasticsearch OSS + the Open Distro for Elasticsearch plugins? I want to be sure you have only one Elasticsearch.
I've installed ES using these articles (Open Distro for Elasticsearch, Wazuh, Kibana):
Agent: is connected to a manager (master or worker/s)
/var/ossec/bin/manage_agents -l
Available agents:
ID: 001, Name: linux-test-1, IP: x.x.x.1
ID: 002, Name: PC1, IP: x.x.x.2
ID: 003, Name: solaris-test-1, IP: x.x.x.3
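A related check, as a sketch (agent_control ships with the Wazuh manager and also reports each agent's connection status):
# lists agents together with their status (Active/Disconnected/Never connected)
/var/ossec/bin/agent_control -l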
There are alerts in the /var/ossec/logs/alerts/alerts.json file for the agent
Yes, there are alerts from all agents.
Filebeat is installed in the manager and configured to work with Wazuh
You can see in the first post that the Filebeat output is OK, both on master and worker.
Filebeat is running
It's running on both master and worker:
systemctl status filebeat
● filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch.
   Loaded: loaded (/usr/lib/systemd/system/filebeat.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2022-02-14 09:22:25 +06; 31min ago
filebeat test output
You can see in the first post that the Filebeat output is OK, both on master and worker.
Review if there are errors or warnings in Filebeat:
No errors/warnings.
Review if there are errors or warnings in Elasticsearch:
[root@wazuh-test-es-1 ]# cat /var/log/elasticsearch/elasticsearch.log | grep -i -E "err|warn"
[2022-02-14T09:20:47,543][INFO ][o.e.n.Node ] [wes-1] JVM arguments [-Xshare:auto, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -XX:+ShowCodeDetailsInExceptionMessages, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.locale.providers=SPI,COMPAT, -Xms2g, -Xmx2g, -XX:+UseG1GC, -XX:G1ReservePercent=25, -XX:InitiatingHeapOccupancyPercent=30, -Djava.io.tmpdir=/tmp/elasticsearch-11580827226973631716, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/elasticsearch, -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Dclk.tck=100, -Djdk.attach.allowAttachSelf=true, -Djava.security.policy=file:///usr/share/elasticsearch/plugins/opendistro-performance-analyzer/pa_config/es_security.policy, -Dlog4j2.formatMsgNoLookups=true, -XX:MaxDirectMemorySize=1073741824, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/etc/elasticsearch, -Des.distribution.flavor=oss, -Des.distribution.type=rpm, -Des.bundled_jdk=true]
[2022-02-14T09:20:51,563][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [wes-1] File /etc/elasticsearch/jvm.options.d/disabledlog4j.options has insecure file permissions (should be 0600)
[2022-02-14T09:20:51,564][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [wes-1] File /etc/elasticsearch/.elasticsearch.keystore.initial_md5sum has insecure file permissions (should be 0600)
[2022-02-14T09:20:51,564][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [wes-1] Directory /etc/elasticsearch/certs has insecure file permissions (should be 0700)
[2022-02-14T09:20:51,564][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [wes-1] File /etc/elasticsearch/certs/admin-key.pem has insecure file permissions (should be 0600)
[2022-02-14T09:20:51,565][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [wes-1] File /etc/elasticsearch/certs/admin.pem has insecure file permissions (should be 0600)
[2022-02-14T09:20:51,565][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [wes-1] File /etc/elasticsearch/certs/root-ca.key has insecure file permissions (should be 0600)
[2022-02-14T09:20:51,565][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [wes-1] File /etc/elasticsearch/certs/root-ca.pem has insecure file permissions (should be 0600)
[2022-02-14T09:20:51,565][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [wes-1] File /etc/elasticsearch/certs/elasticsearch.pem has insecure file permissions (should be 0600)
[2022-02-14T09:20:51,566][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [wes-1] File /etc/elasticsearch/certs/elasticsearch-key.pem has insecure file permissions (should be 0600)
[2022-02-14T09:20:52,912][WARN ][c.a.o.r.s.PluginSettings ] [wes-1] reports:Failed to load /etc/elasticsearch/opendistro-reports-scheduler/reports-scheduler.yml
[2022-02-14T09:20:56,986][WARN ][c.a.o.s.c.Salt ] [wes-1] If you plan to use field masking pls configure compliance salt e1ukloTsQlOgPquJ to be a random string of 16 chars length identical on all nodes
[2022-02-14T09:20:58,328][WARN ][o.e.g.DanglingIndicesState] [wes-1] gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually
[2022-02-14T09:21:01,050][WARN ][c.a.o.s.a.r.AuditMessageRouter] [wes-1] No endpoint configured for categories [BAD_HEADERS, FAILED_LOGIN, MISSING_PRIVILEGES, GRANTED_PRIVILEGES, OPENDISTRO_SECURITY_INDEX_ATTEMPT, SSL_EXCEPTION, AUTHENTICATED, INDEX_EVENT, COMPLIANCE_DOC_READ, COMPLIANCE_DOC_WRITE, COMPLIANCE_EXTERNAL_CONFIG, COMPLIANCE_INTERNAL_CONFIG_READ, COMPLIANCE_INTERNAL_CONFIG_WRITE], using default endpoint
[2022-02-14T09:22:30,365][INFO ][o.e.n.Node ] [wes-1] JVM arguments [-Xshare:auto, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -XX:+ShowCodeDetailsInExceptionMessages, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.locale.providers=SPI,COMPAT, -Xms2g, -Xmx2g, -XX:+UseG1GC, -XX:G1ReservePercent=25, -XX:InitiatingHeapOccupancyPercent=30, -Djava.io.tmpdir=/tmp/elasticsearch-617688534097200974, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/elasticsearch, -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Dclk.tck=100, -Djdk.attach.allowAttachSelf=true, -Djava.security.policy=file:///usr/share/elasticsearch/plugins/opendistro-performance-analyzer/pa_config/es_security.policy, -Dlog4j2.formatMsgNoLookups=true, -XX:MaxDirectMemorySize=1073741824, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/etc/elasticsearch, -Des.distribution.flavor=oss, -Des.distribution.type=rpm, -Des.bundled_jdk=true]
[2022-02-14T09:22:34,003][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [wes-1] File /etc/elasticsearch/jvm.options.d/disabledlog4j.options has insecure file permissions (should be 0600)
[2022-02-14T09:22:34,003][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [wes-1] File /etc/elasticsearch/.elasticsearch.keystore.initial_md5sum has insecure file permissions (should be 0600)
[2022-02-14T09:22:34,003][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [wes-1] Directory /etc/elasticsearch/certs has insecure file permissions (should be 0700)
[2022-02-14T09:22:34,004][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [wes-1] File /etc/elasticsearch/certs/admin-key.pem has insecure file permissions (should be 0600)
[2022-02-14T09:22:34,004][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [wes-1] File /etc/elasticsearch/certs/admin.pem has insecure file permissions (should be 0600)
[2022-02-14T09:22:34,004][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [wes-1] File /etc/elasticsearch/certs/root-ca.key has insecure file permissions (should be 0600)
[2022-02-14T09:22:34,005][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [wes-1] File /etc/elasticsearch/certs/root-ca.pem has insecure file permissions (should be 0600)
[2022-02-14T09:22:34,005][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [wes-1] File /etc/elasticsearch/certs/elasticsearch.pem has insecure file permissions (should be 0600)
[2022-02-14T09:22:34,005][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [wes-1] File /etc/elasticsearch/certs/elasticsearch-key.pem has insecure file permissions (should be 0600)
[2022-02-14T09:22:35,068][WARN ][c.a.o.r.s.PluginSettings ] [wes-1] reports:Failed to load /etc/elasticsearch/opendistro-reports-scheduler/reports-scheduler.yml
[2022-02-14T09:22:39,373][WARN ][c.a.o.s.c.Salt ] [wes-1] If you plan to use field masking pls configure compliance salt e1ukloTsQlOgPquJ to be a random string of 16 chars length identical on all nodes
[2022-02-14T09:22:40,687][WARN ][o.e.g.DanglingIndicesState] [wes-1] gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually
[2022-02-14T09:22:43,401][WARN ][c.a.o.s.a.r.AuditMessageRouter] [wes-1] No endpoint configured for categories [BAD_HEADERS, FAILED_LOGIN, MISSING_PRIVILEGES, GRANTED_PRIVILEGES, OPENDISTRO_SECURITY_INDEX_ATTEMPT, SSL_EXCEPTION, AUTHENTICATED, INDEX_EVENT, COMPLIANCE_DOC_READ, COMPLIANCE_DOC_WRITE, COMPLIANCE_EXTERNAL_CONFIG, COMPLIANCE_INTERNAL_CONFIG_READ, COMPLIANCE_INTERNAL_CONFIG_WRITE], using default endpoint
[2022-02-14T09:23:52,399][WARN ][r.suppressed ] [wes-1] path: /wazuh-alerts-*/_search, params: {ignore_unavailable=true, preference=1644808482801, index=wazuh-alerts-*, timeout=30000ms, track_total_hits=true}
at org.elasticsearch.action.ActionListenerResponseHandler.handleException(ActionListenerResponseHandler.java:59) [elasticsearch-7.10.2.jar:7.10.2]
at org.elasticsearch.transport.TransportChannel.sendErrorResponse(TransportChannel.java:56) [elasticsearch-7.10.2.jar:7.10.2]
[2022-02-14T09:23:52,409][WARN ][r.suppressed ] [wes-1] path: /wazuh-alerts-*/_search, params: {ignore_unavailable=true, preference=1644808482801, index=wazuh-alerts-*, timeout=30000ms, track_total_hits=true}
at org.elasticsearch.action.ActionListenerResponseHandler.handleException(ActionListenerResponseHandler.java:59) [elasticsearch-7.10.2.jar:7.10.2]
at org.elasticsearch.transport.TransportChannel.sendErrorResponse(TransportChannel.java:56) [elasticsearch-7.10.2.jar:7.10.2]
[2022-02-14T09:23:56,017][WARN ][r.suppressed ] [wes-1] path: /wazuh-alerts-*/_search, params: {ignore_unavailable=true, preference=1644808482801, index=wazuh-alerts-*, timeout=30000ms, track_total_hits=true}
at org.elasticsearch.action.search.FetchSearchPhase.innerRun(FetchSearchPhase.java:109) ~[elasticsearch-7.10.2.jar:7.10.2]
[2022-02-14T09:28:55,976][ERROR][c.a.o.s.s.h.n.OpenDistroSecuritySSLNettyHttpServerTransport] [wes-1] Exception during establishing a SSL connection: java.net.SocketException: Connection reset
[2022-02-14T09:28:55,997][ERROR][c.a.o.s.s.h.n.OpenDistroSecuritySSLNettyHttpServerTransport] [wes-1] Exception during establishing a SSL connection: java.net.SocketException: Connection reset
Could you go to the Discover plugin of Kibana using the index pattern of Wazuh alerts and check if there are alerts related to the agent you would expect to see in the Wazuh app for Kibana?
And I've also turned off firewalld on all cluster hosts (Wazuh and ES) to make sure it's not the troublemaker.
Could you share the Filebeat configuration for both Wazuh managers?
cat /etc/filebeat/filebeat.yml
It is strange to me that you have 2 indices related to Wazuh alerts, yet no errors in Filebeat or Elasticsearch that explain why the data is not being indexed in Elasticsearch.
green open wazuh-alerts-4.x-2022.02.09 xxxx 3 0 37085 0 9.1mb 9.1mb
green open wazuh-alerts-4.x-2022.02.08 xxx 3 0 253 0 716.1kb 716.1kb
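A quick way to watch whether new alerts keep arriving in those indices, as a sketch with a placeholder address and credentials:
# re-run after generating a test alert; the count should grow if indexing works
curl -k -u admin:<PASSWORD> "https://<ELASTICSEARCH_ADDRESS>:9200/_cat/count/wazuh-alerts-4.x-*?v"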
Filebeat conf of Manager:
# Wazuh - Filebeat configuration file
output.elasticsearch:
hosts: ["x.x.x.x:9200"]
protocol: https
username: "admin"
password: "xxx"
ssl.certificate_authorities:
- /etc/filebeat/certs/root-ca.pem
ssl.certificate: "/etc/filebeat/certs/filebeat.pem"
ssl.key: "/etc/filebeat/certs/filebeat-key.pem"
setup.template.json.enabled: true
setup.template.json.path: '/etc/filebeat/wazuh-template.json'
setup.template.json.name: 'wazuh'
setup.ilm.overwrite: true
setup.ilm.enabled: false
filebeat.modules:
- module: wazuh
alerts:
enabled: true
archives:
enabled: false
Filebeat conf of Worker:
# Wazuh - Filebeat configuration file
output.elasticsearch:
hosts: ["x.x.x.x:9200"]
protocol: https
username: "admin"
password: "xxx"
ssl.certificate_authorities:
- /etc/filebeat/certs/root-ca.pem
ssl.certificate: "/etc/filebeat/certs/filebeat.pem"
ssl.key: "/etc/filebeat/certs/filebeat-key.pem"
setup.template.json.enabled: true
setup.template.json.path: '/etc/filebeat/wazuh-template.json'
setup.template.json.name: 'wazuh'
setup.ilm.overwrite: true
setup.ilm.enabled: false
filebeat.modules:
- module: wazuh
alerts:
enabled: true
archives:
enabled: false
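Since both files are identical, two extra checks that could help here, as a sketch assuming the default config path and placeholder credentials: Filebeat can validate the configuration file itself, and Elasticsearch can confirm that the template from setup.template.json was loaded.
# validate the Filebeat configuration file (prints the parse error, if any)
filebeat test config -c /etc/filebeat/filebeat.yml
# confirm the wazuh index template reached Elasticsearch
curl -k -u admin:<PASSWORD> "https://<ELASTICSEARCH_ADDRESS>:9200/_cat/templates/wazuh*?v"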
Hi, the Filebeat configuration looks good to me. The alerts set is enabled for the wazuh module in Filebeat.
Check the indices related to Wazuh you have in Elasticsearch:
curl -k -u <ELASTICSEARCH_USER_WITH_PRIVILEGES>:<USER_PASSWORD> https://<ELASTICSEARCH_ADDRESS>:9200/_cat/indices/wazuh*
Share the Filebeat logs; you shared the output of the Filebeat connection test, but not the logs filtered by errors/warnings:
cat /var/log/filebeat/filebeat | grep -i -E "err|warn"
Could you check that you have alerts generated in each worker for today or recent days?
tail -n5 /var/ossec/logs/alerts/alerts.json
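To verify quickly that a given agent's alerts reach the manager's alerts.json, a sketch using one of the agent names above:
# count alerts mentioning the agent; a non-zero result means its alerts arrive
grep -c '"name":"linux-test-1"' /var/ossec/logs/alerts/alerts.json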
Check the indices related to Wazuh you have in Elasticsearch: curl -k -u <ELASTICSEARCH_USER_WITH_PRIVILEGES>:<USER_PASSWORD> https://<ELASTICSEARCH_ADDRESS>:9200/_cat/indices/wazuh*
curl -k -u admin:xxx https://x.x.x.x:9200/_cat/indices/wazuh*
green open wazuh-monitoring-2022.6w sXzYJisHQeGUvenr2Gs6gg 1 0 1078 0 317.2kb 317.2kb
green open wazuh-alerts-4.x-2022.02.09 zNZeGFHQR-uKhvc2aZosrQ 3 0 37085 0 9.1mb 9.1mb
green open wazuh-monitoring-2022.7w e4LZu26RTWSWlsDyArefww 1 0 213 0 204.5kb 204.5kb
green open wazuh-statistics-2022.6w hCi2oJ1LSBa2uo3qteSxEQ 2 0 5244 0 1.4mb 1.4mb
green open wazuh-statistics-2022.7w jXdcMSWyTgO5VuPkHxFzlA 2 0 820 0 515.9kb 515.9kb
green open wazuh-alerts-4.x-2022.02.08 WPJ7jKBcSzi458AXaHg75A 3 0 253 0 716.1kb 716.1kb
Share the Filebeat logs; you shared the output of the Filebeat connection test, but not the logs filtered by errors/warnings: cat /var/log/filebeat/filebeat | grep -i -E "err|warn"
There are no err/warn events on either manager or worker, and no logs for the time when I ran the syscheck test to catch alerts (below).
Could you check that you have alerts generated in each worker for today or recent days? tail -n5 /var/ossec/logs/alerts/alerts.json
Definitely have lots of alerts for today; I've been testing example syscheck events all day. I'd like not to expose my IPs, sorry. But, for example, on my manager node (found in alerts.log, not .json):
- Date: Mon Feb 14 17:17:43 2022
- User: xxx (S-1-5-21-3351178476-418281746-1783964060-153435)
- MD5: 505f14f30a092316f6c37da0e914da85
- SHA1: 8c6038f6c599c1eb183606b36e666c7fa739e6c5
- SHA256: 3bf975d14ff9584609f888ae7983bf6f23999d806657d32561776585ae862ced
- File attributes: ARCHIVE
What changed:
---
> # Test comment5
And the worker (also alerts.log, not .json):
- Date: Mon Feb 14 17:21:37 2022
- Inode: 8833654
- User: root (0)
- Group: root (0)
- MD5: f36fe4fe6822cb773a3e73592a761b6f
- SHA1: 078a5d5c9e30e665c057ce8c1776f11068b3e852
- SHA256: cd823e19efcfe8ea3a3a87ee48f6d81d1a79be8206d77ffc75f2e6801c4f0f6c
What changed:
16a17
> #test-11
alerts.json also has messages like "Listened ports status (netstat) changed (new port opened or closed)." and "CIS Benchmark for Red Hat Enterprise Linux 8...".
And no alerts in Kibana:
Agent connected:
on my manager node (found in alerts.log, not .json)
The alerts set of the wazuh module for Filebeat reads the alerts that the manager generates in the alerts.json file (/var/ossec/logs/alerts/alerts.json) and indexes them to Elasticsearch.
If you don't have the alerts.json file in the Wazuh managers, Filebeat can't index the Wazuh alerts, because this file is expected to exist.
Check in the configuration of both Wazuh managers that the jsonout_output setting is yes; more information at https://documentation.wazuh.com/current/user-manual/reference/ossec-conf/global.html#jsonout-output. This setting makes the generated alerts be logged to the alerts.json file. Maybe you disabled this feature. If you change the Wazuh manager configuration by modifying this setting, you have to restart the manager.
Restart the Wazuh manager:
# depending on your service manager
systemctl restart wazuh-manager
# or
service wazuh-manager restart
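A one-liner to confirm the current value of the setting before restarting, as a sketch:
# should print <jsonout_output>yes</jsonout_output> once the feature is enabled
grep -n "<jsonout_output>" /var/ossec/etc/ossec.conf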
If you don't have the alerts.json file in the Wazuh managers, Filebeat can't index the Wazuh alerts, because this file is expected to exist.
I have this file; I meant to say that there are no syscheck alerts in it.
Check in the configuration of both Wazuh managers that the jsonout_output setting is yes
It's "no" now for both servers. I'll set it to "yes".
I can tell you the result tomorrow.
Thanks for answering.
Hello.
So, I've toggled jsonout_output to yes on both the manager and the worker.
Then restarted the wazuh-manager service on both servers.
Then tried to generate a syscheck event on my Linux host agent (added a comment to /etc/mongod.conf; the agent communicates with the wazuh-worker) at 9:00.
Then I checked alerts.json on the worker (looks like I got the event):
{"timestamp":"2022-02-15T09:02:21.569+0600","rule":{"level":8,"description":"Integrity checksum changed.","id":"550","mitre":{"id":["T1492"],"tactic":["Impact"],"technique":["Stored Data Manipulation"]},"firedtimes":1,"mail":true,"groups":["fim"," agentsyscheck","syscheck_entry_modified","syscheck_file"],"pci_dss":["11.5"],"gpg13":["4.11"],"gdpr":["II_5.1.f"],"hipaa":["164.312.c.1","164.312.c.2"],"nist_800_53":["SI.7"],"tsc":["PI1.4","PI1.5","CC6.1","CC6.8","CC7.2","CC7.3"]},"agent":{"id":"001","name":"linux-test-1","ip":"x.x.x.x"},"manager":{"name":"wazuh-test-w-1"},"id":"1644894141.15672","cluster":{"name":"wazuh-test-cluster","node":"worker-node-1"},"full_log":"File '/etc/mongod.conf' modified\nMode: realtime\nChanged attributes: size,mtime,inode,md5,sha1,sha256\nSize changed from '968' to '977'\nOld modification time was: '1644837697', now it is '1644894141'\nOld inode was: '8833654', now it is '8816374'\nOld md5sum was: 'f36fe4fe6822cb773a3e73592a761b6f'\nNew md5sum is : '0b8927c0cbb4165911bf589944646e2e'\nOld sha1sum was: '078a5d5c9e30e665c057ce8c1776f11068b3e852'\nNew sha1sum is : 'fcffabdacad4262200bd65055ae93a48c88ddd40'\nOld sha256sum was: 'cd823e19efcfe8ea3a3a87ee48f6d81d1a79be8206d77ffc75f2e6801c4f0f6c'\nNew sha256sum is : '030cb1e4c329b0102c8ee29565fc053e9fc311dbb717757f99fde98d6fbb96de'\n","syscheck":{"path":"/etc/mongod.conf","mode":"realtime","size_before":"968","size_after":"977","perm_after":"rw-r--r--","uid_after":"0","gid_after":"0","md5_before":"f36fe4fe6822cb773a3e73592a761b6f","md5_after":"0b8927c0cbb4165911bf589944646e2e","sha1_before":"078a5d5c9e30e665c057ce8c1776f11068b3e852","sha1_after":"fcffabdacad4262200bd65055ae93a48c88ddd40","sha256_before":"cd823e19efcfe8ea3a3a87ee48f6d81d1a79be8206d77ffc75f2e6801c4f0f6c","sha256_after":"030cb1e4c329b0102c8ee29565fc053e9fc311dbb717757f99fde98d6fbb96de","uname_after":"root","gname_after":"root","mtime_before":"2022-02-14T17:21:37","mtime_after":"2022-02-15T09:02:21","inode_before":8833654,"inode_after":8816374,"diff":"17a18\n> #test-12\n","changed_attributes":["size","mtime","inode","md5","sha1","sha256"],"event":"modified"},"decoder":{"name":"syscheck_integrity_changed"},"location":"syscheck"}
and also in alerts.log:
- Date: Tue Feb 15 09:02:21 2022
- Inode: 8816374
- User: root (0)
- Group: root (0)
- MD5: 0b8927c0cbb4165911bf589944646e2e
- SHA1: fcffabdacad4262200bd65055ae93a48c88ddd40
- SHA256: 030cb1e4c329b0102c8ee29565fc053e9fc311dbb717757f99fde98d6fbb96de
What changed: 17a18
> #test-12
Then I've checked Kibana:
My ossec.conf for worker:
<!--
Wazuh - Manager - Default configuration for ol 8.5
More info at: https://documentation.wazuh.com
Mailing list: https://groups.google.com/forum/#!forum/wazuh
-->
<ossec_config>
<global>
<jsonout_output>yes</jsonout_output>
<alerts_log>yes</alerts_log>
<logall>no</logall>
<logall_json>no</logall_json>
<email_notification>yes</email_notification>
<smtp_server>x.x.x.x</smtp_server>
<email_from>wazuh-ww-1@xxxxx</email_from>
<email_maxperhour>12</email_maxperhour>
<email_log_source>alerts.log</email_log_source>
<!-- Emails List -->
<email_to>xxxx</email_to>
<!--
<email_to>xxxx</email_to>
<email_to>xxxx</email_to>
-->
<agents_disconnection_time>90</agents_disconnection_time>
<agents_disconnection_alert_time>120</agents_disconnection_alert_time>
</global>
<alerts>
<log_alert_level>3</log_alert_level>
<email_alert_level>8</email_alert_level>
</alerts>
<!-- Choose between "plain", "json", or "plain,json" for the format of internal logs -->
<logging>
<log_format>plain</log_format>
</logging>
<remote>
<connection>secure</connection>
<port>1514</port>
<protocol>tcp</protocol>
<queue_size>131072</queue_size>
<rids_closing_time>5m</rids_closing_time>
</remote>
<remote>
<connection>syslog</connection>
<port>514</port>
<protocol>udp</protocol>
<allowed-ips>x.x.x.x/24</allowed-ips>
</remote>
<syslog_output>
<level>7</level>
<server>x.x.x.x</server>
</syslog_output>
<!-- Policy monitoring -->
<rootcheck>
<disabled>no</disabled>
<check_files>yes</check_files>
<check_trojans>yes</check_trojans>
<check_dev>yes</check_dev>
<check_sys>yes</check_sys>
<check_pids>yes</check_pids>
<check_ports>yes</check_ports>
<check_if>yes</check_if>
<!-- Frequency that rootcheck is executed - every 12 hours -->
<frequency>43200</frequency>
<rootkit_files>etc/rootcheck/rootkit_files.txt</rootkit_files>
<rootkit_trojans>etc/rootcheck/rootkit_trojans.txt</rootkit_trojans>
<skip_nfs>yes</skip_nfs>
</rootcheck>
<wodle name="cis-cat">
<disabled>yes</disabled>
<timeout>1800</timeout>
<interval>1d</interval>
<scan-on-start>yes</scan-on-start>
<java_path>wodles/java</java_path>
<ciscat_path>wodles/ciscat</ciscat_path>
</wodle>
<!-- Osquery integration -->
<wodle name="osquery">
<disabled>yes</disabled>
<run_daemon>yes</run_daemon>
<log_path>/var/log/osquery/osqueryd.results.log</log_path>
<config_path>/etc/osquery/osquery.conf</config_path>
<add_labels>yes</add_labels>
</wodle>
<!-- System inventory -->
<wodle name="syscollector">
<disabled>no</disabled>
<interval>1h</interval>
<scan_on_start>yes</scan_on_start>
<hardware>yes</hardware>
<os>yes</os>
<network>yes</network>
<packages>yes</packages>
<ports all="no">yes</ports>
<processes>yes</processes>
<!-- Database synchronization settings -->
<synchronization>
<max_eps>10</max_eps>
</synchronization>
</wodle>
<sca>
<enabled>yes</enabled>
<scan_on_start>yes</scan_on_start>
<interval>12h</interval>
<skip_nfs>yes</skip_nfs>
</sca>
<vulnerability-detector>
<enabled>no</enabled>
<interval>5m</interval>
<ignore_time>6h</ignore_time>
<run_on_start>yes</run_on_start>
<!-- Ubuntu OS vulnerabilities -->
<provider name="canonical">
<enabled>no</enabled>
<os>trusty</os>
<os>xenial</os>
<os>bionic</os>
<os>focal</os>
<update_interval>1h</update_interval>
</provider>
<!-- Debian OS vulnerabilities -->
<provider name="debian">
<enabled>no</enabled>
<os>stretch</os>
<os>buster</os>
<update_interval>1h</update_interval>
</provider>
<!-- RedHat OS vulnerabilities -->
<provider name="redhat">
<enabled>no</enabled>
<os>5</os>
<os>6</os>
<os>7</os>
<os>8</os>
<update_interval>1h</update_interval>
</provider>
<!-- Windows OS vulnerabilities -->
<provider name="msu">
<enabled>yes</enabled>
<update_interval>1h</update_interval>
</provider>
<!-- Aggregate vulnerabilities -->
<provider name="nvd">
<enabled>yes</enabled>
<update_from_year>2010</update_from_year>
<update_interval>1h</update_interval>
</provider>
</vulnerability-detector>
<!-- File integrity monitoring -->
<syscheck>
<disabled>no</disabled>
<!-- Frequency that syscheck is executed default every 12 hours -->
<frequency>43200</frequency>
<scan_on_start>yes</scan_on_start>
<!-- Generate alert when new file detected -->
<alert_new_files>yes</alert_new_files>
<!-- Don't ignore files that change more than 'frequency' times -->
<auto_ignore frequency="10" timeframe="3600">no</auto_ignore>
<!-- Directories to check (perform all possible verifications) -->
<directories>/etc,/usr/bin,/usr/sbin</directories>
<directories>/bin,/sbin,/boot</directories>
<!-- Files/directories to ignore -->
<ignore>/etc/mtab</ignore>
<ignore>/etc/hosts.deny</ignore>
<ignore>/etc/mail/statistics</ignore>
<ignore>/etc/random-seed</ignore>
<ignore>/etc/random.seed</ignore>
<ignore>/etc/adjtime</ignore>
<ignore>/etc/httpd/logs</ignore>
<ignore>/etc/utmpx</ignore>
<ignore>/etc/wtmpx</ignore>
<ignore>/etc/cups/certs</ignore>
<ignore>/etc/dumpdates</ignore>
<ignore>/etc/svc/volatile</ignore>
<!-- File types to ignore -->
<ignore type="sregex">.log$|.swp$</ignore>
<!-- Check the file, but never compute the diff -->
<nodiff>/etc/ssl/private.key</nodiff>
<skip_nfs>yes</skip_nfs>
<skip_dev>yes</skip_dev>
<skip_proc>yes</skip_proc>
<skip_sys>yes</skip_sys>
<!-- Nice value for Syscheck process -->
<process_priority>10</process_priority>
<!-- Maximum output throughput -->
<max_eps>100</max_eps>
<!-- Database synchronization settings -->
<synchronization>
<enabled>yes</enabled>
<interval>5m</interval>
<max_interval>1h</max_interval>
<max_eps>10</max_eps>
</synchronization>
</syscheck>
<!-- Active response -->
<global>
<white_list>127.0.0.1</white_list>
<white_list>^localhost.localdomain$</white_list>
<white_list>x.x.x.x</white_list>
</global>
<command>
<name>disable-account</name>
<executable>disable-account</executable>
<timeout_allowed>yes</timeout_allowed>
</command>
<command>
<name>restart-wazuh</name>
<executable>restart-wazuh</executable>
</command>
<command>
<name>firewall-drop</name>
<executable>firewall-drop</executable>
<timeout_allowed>yes</timeout_allowed>
</command>
<command>
<name>host-deny</name>
<executable>host-deny</executable>
<timeout_allowed>yes</timeout_allowed>
</command>
<command>
<name>route-null</name>
<executable>route-null</executable>
<timeout_allowed>yes</timeout_allowed>
</command>
<command>
<name>win_route-null</name>
<executable>route-null.exe</executable>
<timeout_allowed>yes</timeout_allowed>
</command>
<command>
<name>netsh</name>
<executable>netsh.exe</executable>
<timeout_allowed>yes</timeout_allowed>
</command>
<!--
<active-response>
active-response options here
</active-response>
-->
<!-- Log analysis -->
<localfile>
<log_format>command</log_format>
<command>df -P</command>
<frequency>360</frequency>
</localfile>
<localfile>
<log_format>full_command</log_format>
<command>netstat -tulpn | sed 's/\([[:alnum:]]\+\)\ \+[[:digit:]]\+\ \+[[:digit:]]\+\ \+\(.*\):\([[:digit:]]*\)\ \+\([0-9\.\:\*]\+\).\+\ \([[:digit:]]*\/[[:alnum:]\-]*\).*/\1 \2 == \3 == \4 \5/' | sort -k 4 -g | sed 's/ == \(.*\) ==/:\1/' | sed 1,2d</command>
<alias>netstat listening ports</alias>
<frequency>360</frequency>
</localfile>
<localfile>
<log_format>full_command</log_format>
<command>last -n 20</command>
<frequency>360</frequency>
</localfile>
<ruleset>
<!-- Default ruleset -->
<decoder_dir>ruleset/decoders</decoder_dir>
<rule_dir>ruleset/rules</rule_dir>
<rule_exclude>0215-policy_rules.xml</rule_exclude>
<list>etc/lists/audit-keys</list>
<list>etc/lists/amazon/aws-eventnames</list>
<list>etc/lists/security-eventchannel</list>
<!-- User-defined ruleset -->
<decoder_dir>etc/decoders</decoder_dir>
<rule_dir>etc/rules</rule_dir>
</ruleset>
<rule_test>
<enabled>yes</enabled>
<threads>1</threads>
<max_sessions>64</max_sessions>
<session_timeout>15m</session_timeout>
</rule_test>
<!-- Configuration for wazuh-authd -->
<auth>
<disabled>no</disabled>
<port>1515</port>
<use_source_ip>yes</use_source_ip>
<force_insert>yes</force_insert>
<force_time>0</force_time>
<purge>yes</purge>
<use_password>no</use_password>
<ciphers>HIGH:!ADH:!EXP:!MD5:!RC4:!3DES:!CAMELLIA:@STRENGTH</ciphers>
<!-- <ssl_agent_ca></ssl_agent_ca> -->
<ssl_verify_host>no</ssl_verify_host>
<ssl_manager_cert>etc/sslmanager.cert</ssl_manager_cert>
<ssl_manager_key>etc/sslmanager.key</ssl_manager_key>
<ssl_auto_negotiate>no</ssl_auto_negotiate>
</auth>
<cluster>
<name>wazuh-test-cluster</name>
<node_name>worker-node-1</node_name>
<node_type>worker</node_type>
<key>xxx</key>
<port>1516</port>
<bind_addr>0.0.0.0</bind_addr>
<nodes>
<node>x.x.x.x</node>
</nodes>
<hidden>no</hidden>
<disabled>no</disabled>
</cluster>
</ossec_config>
<ossec_config>
<localfile>
<log_format>audit</log_format>
<location>/var/log/audit/audit.log</location>
</localfile>
<localfile>
<log_format>syslog</log_format>
<location>/var/ossec/logs/active-responses.log</location>
</localfile>
<localfile>
<log_format>syslog</log_format>
<location>/var/log/messages</location>
</localfile>
<localfile>
<log_format>syslog</log_format>
<location>/var/log/secure</location>
</localfile>
<localfile>
<log_format>syslog</log_format>
<location>/var/log/maillog</location>
</localfile>
</ossec_config>
The manager's conf is the same except for the <cluster> section (node_name master-node, node_type master).
Cluster is OK: /var/ossec/bin/cluster_control -l
NAME TYPE VERSION ADDRESS
master-node master 4.2.5 x.x.x.1
worker-node-1 worker 4.2.5 x.x.x.2
Also, I've noticed that I get Windows "Registry Key Integrity Checksum Changed"/Deleted events in Kibana (id 750), but no "Integrity checksum changed." (id 550), though I get notifications of these events by e-mail.
My 0015-ossec_rules.xml is not modified; I use 0015-ossec_rules_custom.xml in etc/rules. Now it's commented out (I thought it was the troublemaker), but the result is the same, though "agent started/stopped" works fine.
<group name="fim, agent">
<!-- <rule id="503" level="3"> -->
<rule id="503" level="8" overwrite="yes">
<if_sid>500</if_sid>
<match>Agent started</match>
<description>Ossec agent started.</description>
<group>pci_dss_10.6.1,pci_dss_10.2.6,gpg13_10.1,gdpr_IV_35.7.d,hipaa_164.312.b,nist_800_53_AU.6,nist_800_53_AU.14,nist_800_53_AU.5,tsc_CC7.2,tsc_CC7.3,tsc_CC6.8,</group>
</rule>
<!-- <rule id="504" level="3"> -->
<rule id="504" level="8" overwrite="yes">
<if_sid>500</if_sid>
<match>Agent disconnected</match>
<description>Ossec agent disconnected.</description>
<mitre>
<id>T1089</id>
</mitre>
<group>pci_dss_10.6.1,pci_dss_10.2.6,gpg13_10.1,gdpr_IV_35.7.d,hipaa_164.312.b,nist_800_53_AU.6,nist_800_53_AU.14,nist_800_53_AU.5,tsc_CC7.2,tsc_CC7.3,tsc_CC6.8,</group>
</rule>
<!-- <rule id="505" level="3"> -->
<rule id="505" level="7" overwrite="yes">
<if_sid>500</if_sid>
<match>Agent removed</match>
<description>Ossec agent removed.</description>
<mitre>
<id>T1089</id>
</mitre>
<group>pci_dss_10.6.1,pci_dss_10.2.6,gpg13_10.1,gdpr_IV_35.7.d,hipaa_164.312.b,nist_800_53_AU.6,nist_800_53_AU.14,nist_800_53_AU.5,tsc_CC7.2,tsc_CC7.3,tsc_CC6.8,</group>
</rule>
<!-- <rule id="506" level="3"> -->
<rule id="506" level="8" overwrite="yes">
<if_sid>500</if_sid>
<match>Agent stopped</match>
<description>Ossec agent stopped.</description>
<mitre>
<id>T1089</id>
</mitre>
<group>pci_dss_10.6.1,pci_dss_10.2.6,gpg13_10.1,gdpr_IV_35.7.d,hipaa_164.312.b,nist_800_53_AU.6,nist_800_53_AU.14,nist_800_53_AU.5,tsc_CC7.2,tsc_CC7.3,tsc_CC6.8,</group>
</rule>
<!-- <rule id="550" level="7"> -->
<!--
<rule id="550" level="8" overwrite="yes">
<category>ossec</category>
<decoded_as>syscheck_integrity_changed</decoded_as>
<description>Integrity checksum changed.</description>
<mitre>
<id>T1492</id>
</mitre>
<group>syscheck,syscheck_entry_modified,syscheck_file,pci_dss_11.5,gpg13_4.11,gdpr_II_5.1.f,hipaa_164.312.c.1,hipaa_164.312.c.2,nist_800_53_SI.7,tsc_PI1.4,tsc_PI1.5,tsc_CC6.1,tsc_CC6.8,tsc_CC7.2,tsc_CC7.3,</group>
</rule>
-->
<!-- <rule id="553" level="7"> -->
<rule id="553" level="8" overwrite="yes">
<category>ossec</category>
<decoded_as>syscheck_deleted</decoded_as>
<description>File deleted.</description>
<mitre>
<id>T1107</id>
<id>T1485</id>
</mitre>
<group>syscheck,syscheck_entry_deleted,syscheck_file,pci_dss_11.5,gpg13_4.11,gdpr_II_5.1.f,hipaa_164.312.c.1,hipaa_164.312.c.2,nist_800_53_SI.7,tsc_PI1.4,tsc_PI1.5,tsc_CC6.1,tsc_CC6.8,tsc_CC7.2,tsc_CC7.3,</group>
</rule>
<!--
<rule id="554" level="5">
<category>ossec</category>
<decoded_as>syscheck_new_entry</decoded_as>
<description>File added to the system.</description>
<group>syscheck,syscheck_entry_added,syscheck_file,pci_dss_11.5,gpg13_4.11,gdpr_II_5.1.f,hipaa_164.312.c.1,hipaa_164.312.c.2,nist_800_53_SI.7,tsc_PI1.4,tsc_PI1.5,tsc_CC6.1,tsc_CC6.8,tsc_CC7.2,tsc_CC7.3,</group>
</rule>
<rule id="750" level="5">
<category>ossec</category>
<decoded_as>syscheck_registry_value_modified</decoded_as>
<group>syscheck,syscheck_entry_modified,syscheck_registry,pci_dss_11.5,gpg13_4.13,gdpr_II_5.1.f,hipaa_164.312.c.1,hipaa_164.312.c.2,nist_800_53_SI.7,tsc_PI1.4,tsc_PI1.5,tsc_CC6.1,tsc_CC6.8,tsc_CC7.2,tsc_CC7.3,</group>
<description>Registry Value Integrity Checksum Changed</description>
<mitre>
<id>T1492</id>
</mitre>
</rule>
<rule id="751" level="5">
<category>ossec</category>
<decoded_as>syscheck_registry_value_deleted</decoded_as>
<group>syscheck,syscheck_entry_deleted,syscheck_registry,pci_dss_11.5,gpg13_4.13,gdpr_II_5.1.f,hipaa_164.312.c.1,hipaa_164.312.c.2,nist_800_53_SI.7,tsc_PI1.4,tsc_PI1.5,tsc_CC6.1,tsc_CC6.8,tsc_CC7.2,tsc_CC7.3,</group>
<description>Registry Value Entry Deleted.</description>
<mitre>
<id>T1107</id>
<id>T1485</id>
</mitre>
</rule>
<rule id="752" level="5">
<category>ossec</category>
<decoded_as>syscheck_registry_value_added</decoded_as>
<group>syscheck,syscheck_entry_added,syscheck_registry,pci_dss_11.5,gpg13_4.13,gdpr_II_5.1.f,hipaa_164.312.c.1,hipaa_164.312.c.2,nist_800_53_SI.7,tsc_PI1.4,tsc_PI1.5,tsc_CC6.1,tsc_CC6.8,tsc_CC7.2,tsc_CC7.3,</group>
<description>Registry Value Entry Added to the System</description>
</rule>
-->
</group>
For the event you got:
{"timestamp":"2022-02-15T09:02:21.569+0600","rule":{"level":8,"description":"Integrity checksum changed.","id":"550","mitre":{"id":["T1492"],"tactic":["Impact"],"technique":["Stored Data Manipulation"]},"firedtimes":1,"mail":true,"groups":["fim"," agentsyscheck","syscheck_entry_modified","syscheck_file"],"pci_dss":["11.5"],"gpg13":["4.11"],"gdpr":["II_5.1.f"],"hipaa":["164.312.c.1","164.312.c.2"],"nist_800_53":["SI.7"],"tsc":["PI1.4","PI1.5","CC6.1","CC6.8","CC7.2","CC7.3"]},"agent":{"id":"001","name":"linux-test-1","ip":"x.x.x.x"},"manager":{"name":"wazuh-test-w-1"},"id":"1644894141.15672","cluster":{"name":"wazuh-test-cluster","node":"worker-node-1"},"full_log":"File '/etc/mongod.conf' modified\nMode: realtime\nChanged attributes: size,mtime,inode,md5,sha1,sha256\nSize changed from '968' to '977'\nOld modification time was: '1644837697', now it is '1644894141'\nOld inode was: '8833654', now it is '8816374'\nOld md5sum was: 'f36fe4fe6822cb773a3e73592a761b6f'\nNew md5sum is : '0b8927c0cbb4165911bf589944646e2e'\nOld sha1sum was: '078a5d5c9e30e665c057ce8c1776f11068b3e852'\nNew sha1sum is : 'fcffabdacad4262200bd65055ae93a48c88ddd40'\nOld sha256sum was: 'cd823e19efcfe8ea3a3a87ee48f6d81d1a79be8206d77ffc75f2e6801c4f0f6c'\nNew sha256sum is : '030cb1e4c329b0102c8ee29565fc053e9fc311dbb717757f99fde98d6fbb96de'\n","syscheck":{"path":"/etc/mongod.conf","mode":"realtime","size_before":"968","size_after":"977","perm_after":"rw-r--r--","uid_after":"0","gid_after":"0","md5_before":"f36fe4fe6822cb773a3e73592a761b6f","md5_after":"0b8927c0cbb4165911bf589944646e2e","sha1_before":"078a5d5c9e30e665c057ce8c1776f11068b3e852","sha1_after":"fcffabdacad4262200bd65055ae93a48c88ddd40","sha256_before":"cd823e19efcfe8ea3a3a87ee48f6d81d1a79be8206d77ffc75f2e6801c4f0f6c","sha256_after":"030cb1e4c329b0102c8ee29565fc053e9fc311dbb717757f99fde98d6fbb96de","uname_after":"root","gname_after":"root","mtime_before":"2022-02-14T17:21:37","mtime_after":"2022-02-15T09:02:21","inode_before":8833654,"inode_after":8816374,"diff":"17a18\n> #test-12\n","changed_attributes":["size","mtime","inode","md5","sha1","sha256"],"event":"modified"},"decoder":{"name":"syscheck_integrity_changed"},"location":"syscheck"}
it seems that it has no syscheck value in the rule.groups field. The values for rule.groups are ["fim"," agentsyscheck","syscheck_entry_modified","syscheck_file"]. If you look at the Modules/Integrity monitoring section in the Wazuh app for Kibana, you will see that there is a filter to get the alerts whose rule.groups field has the syscheck value. The alert doesn't have that value in the rule.groups field, so it is not displayed in the UI, but the alert should be indexed if Filebeat is working as expected and the alert is logged in /var/ossec/logs/alerts/alerts.json. You could check that the alert is indexed by going to Modules/Security events and adding a filter for rule.id is 550, or by using the Discover plugin for Kibana.
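A quick way to inspect the rule.groups of the newest alert without reading the whole JSON document, as a sketch assuming jq is installed (alerts.json holds one JSON alert per line):
# print the groups array of the latest alert; look for "syscheck" in it
tail -n 1 /var/ossec/logs/alerts/alerts.json | jq '.rule.groups'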
In the custom 0015-ossec_rules_custom.xml rule file, you overwrote the rule 550, replacing the value of group (it has no syscheck). The original rule with id 550 for Wazuh 4.2.5 contains the syscheck group, as you can see here: https://github.com/wazuh/wazuh/blob/v4.2.5/ruleset/rules/0015-ossec_rules.xml#L234.
If you need to overwrite the rule and you want it to appear under Modules/Integrity monitoring, you should make sure the generated alert matches the filter of the module in the Wazuh app; for this alert, add syscheck to the group key of the rule definition.
Nope, my custom rule is (the 550 part):
<!-- <rule id="550" level="7"> -->
<rule id="550" level="8" overwrite="yes">
<category>ossec</category>
<decoded_as>syscheck_integrity_changed</decoded_as>
<description>Integrity checksum changed.</description>
<mitre>
<id>T1492</id>
</mitre>
<group>syscheck,syscheck_entry_modified,syscheck_file,pci_dss_11.5,gpg13_4.11,gdpr_II_5.1.f,hipaa_164.312.c.1,hipaa_164.312.c.2,nist_800_53_SI.7,tsc_PI1.4,tsc_PI1.5,tsc_CC6.1,tsc_CC6.8,tsc_CC7.2,tsc_CC7.3,</group>
</rule>
You can see that "group" contains "syscheck". Also, my default 0015 config has the same "group".
Now, what I get in alerts.json:
{"timestamp":"2022-02-15T14:37:59.344+0600","rule":{"level":8,"description":"Integrity checksum changed.","id":"550","mitre":{"id":["T1492"],"tactic":["Impact"],"technique":["Stored Data Manipulation"]},"firedtimes":3,"mail":true,"groups":["fim"," agentsyscheck","syscheck_entry_modified","syscheck_file"]....
No syscheck, as you can see. So why does it happen?
But in Kibana's plugin there is such an event:
You are right, but maybe you have another file that could be overwriting the rule with id 550.
In the Wazuh app for Kibana, go to Tools/API Console and run the next request and share the output:
GET /rules?rule_ids=550
In the next example, I overwrote the rule in a custom file (local_rules.xml) without the syscheck group, but other files containing the rule should appear.
GET /rules?rule_ids=550
Only two rules as I see, my custom one and the default. And as I understand, both show the correct groups.
{
"data": {
"affected_items": [
{
"filename": "0015-ossec_rules_custom.xml",
"relative_dirname": "etc/rules",
"id": 550,
"level": 8,
"status": "enabled",
"details": {
"overwrite": "yes",
"category": "ossec",
"decoded_as": "syscheck_integrity_changed"
},
"pci_dss": [
"11.5"
],
"gpg13": [
"4.11"
],
"gdpr": [
"II_5.1.f"
],
"hipaa": [
"164.312.c.1",
"164.312.c.2"
],
"nist_800_53": [
"SI.7"
],
"tsc": [
"PI1.4",
"PI1.5",
"CC6.1",
"CC6.8",
"CC7.2",
"CC7.3"
],
"mitre": [
"T1492"
],
"groups": [
"syscheck",
"syscheck_entry_modified",
"syscheck_file",
"ossec"
],
"description": "Integrity checksum changed."
},
{
"filename": "0015-ossec_rules.xml",
"relative_dirname": "ruleset/rules",
"id": 550,
"level": 7,
"status": "enabled",
"details": {
"category": "ossec",
"decoded_as": "syscheck_integrity_changed"
},
"pci_dss": [
"11.5"
],
"gpg13": [
"4.11"
],
"gdpr": [
"II_5.1.f"
],
"hipaa": [
"164.312.c.1",
"164.312.c.2"
],
"nist_800_53": [
"SI.7"
],
"tsc": [
"PI1.4",
"PI1.5",
"CC6.1",
"CC6.8",
"CC7.2",
"CC7.3"
],
"mitre": [
"T1492"
],
"groups": [
"syscheck",
"syscheck_entry_modified",
"syscheck_file",
"ossec"
],
"description": "Integrity checksum changed."
}
],
"total_affected_items": 2,
"total_failed_items": 0,
"failed_items": []
},
"message": "All selected rules were returned",
"error": 0
}
It seems the rule definitions in the different files have syscheck as a value for group.
Try restarting the managers and generating the alert with id 550. Review the generated alert in the /var/ossec/logs/alerts/alerts.json of the Wazuh manager the agent reports to, and check whether the rule.groups field has the syscheck value.
Take into account that the previously generated alerts were indexed with their values, so if you change some rule definition, it will be applied to the new alerts after restarting the manager.
The alert you displayed has ["fim"," agentsyscheck","syscheck_entry_modified","syscheck_file"] as the values of rule.groups. The fim and agentsyscheck values do not appear in the rule definition you got with the API request. It is strange. (A possible explanation, as a guess: the wrapping <group name="fim, agent"> in the custom file has no trailing comma, so its last group concatenates with the rule's first group, syscheck, producing " agentsyscheck"; group names in the stock ruleset files are written with a trailing comma, e.g. name="syscheck,".)
I've restarted my manager and worker.
Generated the event (550); checked alerts.json:
{"timestamp":"2022-02-15T16:22:41.995+0600","rule":{"level":8,"description":"Integrity checksum changed.","id":"550","mitre":{"id":["T1492"],"tactic":["Impact"],"technique":["Stored Data Manipulation"]},"firedtimes":1,"mail":true,"groups":["ossec","syscheck","syscheck_entry_modified","syscheck_file"],"pci_dss":["11.5"],"gpg13":["4.11"],"gdpr":["II_5.1.f"],"hipaa":["164.312.c.1","164.312.c.2"],"nist_800_53":["SI.7"],"tsc":["PI1.4","PI1.5","CC6.1","CC6.8","CC7.2","CC7.3"]},"agent":{"id":"001","name":"linux-test-1","ip":"x.x.x.x"},"manager":{"name":"wazuh-test-m-1"},"id":"1644920561.1418578","cluster":{"name":"wazuh-test-cluster","node":"master-node"},"full_log":"File '/etc/mongod.conf' modified\nMode: realtime\nChanged attributes: size,mtime,inode,md5,sha1,sha256\nSize changed from '1036' to '1043'\nOld modification time was: '1644915133', now it is '1644920561'\nOld inode was: '8580179', now it is '8580181'\nOld md5sum was: '64886c1bd5561d2ed4adeadba5188836'\nNew md5sum is : '56ce5790bc58be7e4e396cf9a65729e7'\nOld sha1sum was: '1e6096c1ab93f5360975d40bce440028f2fb743a'\nNew sha1sum is : '46a48f16d999d5f030862585403e66cd3883146e'\nOld sha256sum was: '0af61238b75cb601c450fd24e1dd022e883fd74881cccd1f6261b1352dd29b57'\nNew sha256sum is : '7ac62a91289312766da0cada465843762be4e9388eb809b61e1eaa8a90b448aa'\n","syscheck":{"path":"/etc/mongod.conf","mode":"realtime","size_before":"1036","size_after":"1043","perm_after":"rw-r--r--","uid_after":"0","gid_after":"0","md5_before":"64886c1bd5561d2ed4adeadba5188836","md5_after":"56ce5790bc58be7e4e396cf9a65729e7","sha1_before":"1e6096c1ab93f5360975d40bce440028f2fb743a","sha1_after":"46a48f16d999d5f030862585403e66cd3883146e","sha256_before":"0af61238b75cb601c450fd24e1dd022e883fd74881cccd1f6261b1352dd29b57","sha256_after":"7ac62a91289312766da0cada465843762be4e9388eb809b61e1eaa8a90b448aa","uname_after":"root","gname_after":"root","mtime_before":"2022-02-15T14:52:13","mtime_after":"2022-02-15T16:22:41","inode_before":8580179,"inode_after":8580181,"diff":"25c25\n< #\n---\n> #test-19\n","changed_attributes":["size","mtime","inode","md5","sha1","sha256"],"event":"modified"},"decoder":{"name":"syscheck_integrity_changed"},"location":"syscheck"}
Looks like it's correct now:
"groups":["ossec","syscheck","syscheck_entry_modified","syscheck_file"]
Yep, Kibana now shows this event!
The only thing I haven't understood: I've restarted the managers many times, and the ossec rule was the same all the time. The only thing I've changed is "jsonout_output", and even then I restarted the Wazuh manager, so what was wrong from the beginning?
Anyway, thank you so much!
The only issue for now is that my Solaris agent doesn't generate any Integrity Changed alerts at all: not in alerts.json/alerts.log; not by e-mail.
But I do get e-mail and alerts.json events when the agent is stopped/started (503/506).
The Solaris agent was installed following this article (Solaris 11 i386): https://documentation.wazuh.com/current/installation-guide/wazuh-agent/wazuh-agent-package-solaris.html
There is also no firewall, and I can see it's connected in the Kibana UI.
Linux and Solaris agents use the same agent.conf:
<agent_config>
<!-- Shared agent configuration here -->
<client>
<notify_time>60</notify_time>
<time-reconnect>60</time-reconnect>
</client>
<syscheck>
<frequency>43200</frequency>
<directories check_all="yes" realtime="yes" report_changes="yes">/etc</directories>
</syscheck>
</agent_config>
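To rule out a syntax problem in this shared agent.conf, the manager ships a checker; a minimal sketch:
# validates the shared agent.conf files under /var/ossec/etc/shared/
/var/ossec/bin/verify-agent-conf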
Could you be so kind as to help me with it?
The only thing I haven't understood: I've restarted the managers many times, and the ossec rule was the same all the time. The only thing I've changed is "jsonout_output", and even then I restarted the Wazuh manager, so what was wrong from the beginning?
When you have a cluster of Wazuh managers, if you edit a custom rule on a worker manager, these changes will be replaced by the centralized configuration of the master manager after a time, due to synchronization. Maybe you edited the rule file located on a worker, but the master had a wrong rule definition, and an alert was generated with the wrong rule.groups values, such as fim or agentsyscheck. If you are editing the managers' local files for custom rules, decoders or lists, do it on the master manager node.
The only issue for now is that my Solaris agent doesn't generate any Integrity Changed alerts at all: not in alerts.json/alerts.log; not by e-mail.
Maybe there is some type of problem with the Solaris agent. You could check if there are logs related to the Solaris Wazuh agent by looking at its log file:
cat /var/ossec/logs/ossec.log
or filtering by errors/warnings:
cat /var/ossec/logs/ossec.log | grep -i -E "err|warn"
or filtering by the syscheck module:
cat /var/ossec/logs/ossec.log | grep -i syscheck
You could be interested in getting the configuration of the syscheck module for the agent. You could use the Wazuh API: in the Wazuh app for Kibana, go to Tools/API Console and do the next request:
GET /agents/<AGENT_ID>/config/syscheck/syscheck
replacing <AGENT_ID> with the id of the agent you want to query, the Solaris agent. For example, if the Solaris agent has the id 001, the request is:
GET /agents/001/config/syscheck/syscheck
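If you prefer the command line to the API Console, the same request can be made with curl, as a sketch assuming the default Wazuh API port 55000 and placeholder credentials:
# obtain a JWT token, then query the agent's syscheck configuration
TOKEN=$(curl -s -k -u <API_USER>:<API_PASSWORD> "https://<MANAGER_ADDRESS>:55000/security/user/authenticate?raw=true")
curl -s -k "https://<MANAGER_ADDRESS>:55000/agents/001/config/syscheck/syscheck" -H "Authorization: Bearer $TOKEN"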
What version of the Wazuh agent do you have installed on the Solaris machine?
The original issue, related to displaying the alerts in Kibana, seems to be solved, and you could have another problem with the Solaris agent not generating alerts for the syscheck module. If you consider that this problem is different from the subject of the issue, you may want to open a new issue in the proper Wazuh repository to get better context, so that the issue thread could help other users with the same problem.
Thank you for answering.
The original issue, related to displaying the alerts in Kibana, seems to be solved, and you could have another problem with the Solaris agent not generating alerts for the syscheck module. If you consider that this problem is different from the subject of the issue, you may want to open a new issue in the proper Wazuh repository to get better context, so that the issue thread could help other users with the same problem.
I'm pretty sure it's somehow connected, I just don't know how... The symptoms are pretty much the same.
What version of the Wazuh agent do you have installed on the Solaris machine?
4.2.5, like the Linux and Windows agents in my lab.
cat /var/ossec/logs/ossec.log | grep -i -E "err|warn"
Only these:
2022/02/15 17:59:23 rootcheck: ERROR: No rootcheck_files file: 'etc/shared/rootkit_files.txt'
2022/02/15 17:59:23 rootcheck: ERROR: No rootcheck_trojans file: 'etc/shared/rootkit_trojans.txt'
cat /var/ossec/logs/ossec.log | grep -i syscheck
2022/02/15 17:59:22 wazuh-syscheckd: INFO: Started (pid: 1722).
2022/02/15 17:59:22 wazuh-syscheckd: INFO: (6003): Monitoring path: '/bin', with options 'size | permissions | owner | group | mtime | inode | hash_md5 | hash_sha1 | hash_sha256 | scheduled'.
2022/02/15 17:59:22 wazuh-syscheckd: INFO: (6003): Monitoring path: '/boot', with options 'size | permissions | owner | group | mtime | inode | hash_md5 | hash_sha1 | hash_sha256 | scheduled'.
2022/02/15 17:59:22 wazuh-syscheckd: INFO: (6003): Monitoring path: '/etc', with options 'size | permissions | owner | group | mtime | inode | hash_md5 | hash_sha1 | hash_sha256 | report_changes | realtime'.
2022/02/15 17:59:22 wazuh-syscheckd: INFO: (6003): Monitoring path: '/sbin', with options 'size | permissions | owner | group | mtime | inode | hash_md5 | hash_sha1 | hash_sha256 | scheduled'.
2022/02/15 17:59:22 wazuh-syscheckd: INFO: (6003): Monitoring path: '/usr/bin', with options 'size | permissions | owner | group | mtime | inode | hash_md5 | hash_sha1 | hash_sha256 | scheduled'.
2022/02/15 17:59:22 wazuh-syscheckd: INFO: (6003): Monitoring path: '/usr/sbin', with options 'size | permissions | owner | group | mtime | inode | hash_md5 | hash_sha1 | hash_sha256 | scheduled'.
2022/02/15 17:59:22 wazuh-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/mtab'
2022/02/15 17:59:22 wazuh-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/hosts.deny'
2022/02/15 17:59:22 wazuh-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/mail/statistics'
2022/02/15 17:59:22 wazuh-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/random-seed'
2022/02/15 17:59:22 wazuh-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/random.seed'
2022/02/15 17:59:22 wazuh-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/adjtime'
2022/02/15 17:59:22 wazuh-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/httpd/logs'
2022/02/15 17:59:22 wazuh-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/utmpx'
2022/02/15 17:59:22 wazuh-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/wtmpx'
2022/02/15 17:59:22 wazuh-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/cups/certs'
2022/02/15 17:59:22 wazuh-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/dumpdates'
2022/02/15 17:59:22 wazuh-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/svc/volatile'
2022/02/15 17:59:22 wazuh-syscheckd: INFO: (6207): Ignore 'file' sregex '.log$|.swp$'
2022/02/15 17:59:22 wazuh-syscheckd: INFO: (6004): No diff for file: '/etc/ssl/private.key'
2022/02/15 17:59:22 wazuh-syscheckd: WARNING: (6908): Ignoring flag for real time monitoring on directory: '/etc'.
2022/02/15 17:59:22 wazuh-syscheckd: INFO: (6000): Starting daemon...
2022/02/15 17:59:22 wazuh-syscheckd: INFO: (6010): File integrity monitoring scan frequency: 43200 seconds
2022/02/15 17:59:22 wazuh-syscheckd: INFO: (6008): File integrity monitoring scan started.
2022/02/15 17:59:55 wazuh-syscheckd: INFO: (6009): File integrity monitoring scan ended.
This is strange; what is wrong with the realtime="yes" parameter?
WARNING: (6908): Ignoring flag for real time monitoring on directory: '/etc'.
GET /agents/003/config/syscheck/syscheck
{
"data": {
"syscheck": {
"disabled": "no",
"frequency": 43200,
"skip_nfs": "yes",
"skip_dev": "yes",
"skip_sys": "yes",
"skip_proc": "yes",
"scan_on_start": "yes",
"max_files_per_second": 0,
"file_limit": {
"enabled": "yes",
"entries": 100000
},
"diff": {
"disk_quota": {
"enabled": "yes",
"limit": 1048576
},
"file_size": {
"enabled": "yes",
"limit": 51200
}
},
"directories": [
{
"opts": [
"check_md5sum",
"check_sha1sum",
"check_perm",
"check_size",
"check_owner",
"check_group",
"check_mtime",
"check_inode",
"check_sha256sum"
],
"dir": "/bin",
"recursion_level": 256,
"diff_size_limit": 51200
},
{
"opts": [
"check_md5sum",
"check_sha1sum",
"check_perm",
"check_size",
"check_owner",
"check_group",
"check_mtime",
"check_inode",
"check_sha256sum"
],
"dir": "/boot",
"recursion_level": 256,
"diff_size_limit": 51200
},
{
"opts": [
"check_md5sum",
"check_sha1sum",
"check_perm",
"check_size",
"check_owner",
"check_group",
"check_mtime",
"check_inode",
"report_changes",
"check_sha256sum"
],
"dir": "/etc",
"recursion_level": 256,
"diff_size_limit": 51200
},
{
"opts": [
"check_md5sum",
"check_sha1sum",
"check_perm",
"check_size",
"check_owner",
"check_group",
"check_mtime",
"check_inode",
"check_sha256sum"
],
"dir": "/sbin",
"recursion_level": 256,
"diff_size_limit": 51200
},
{
"opts": [
"check_md5sum",
"check_sha1sum",
"check_perm",
"check_size",
"check_owner",
"check_group",
"check_mtime",
"check_inode",
"check_sha256sum"
],
"dir": "/usr/bin",
"recursion_level": 256,
"diff_size_limit": 51200
},
{
"opts": [
"check_md5sum",
"check_sha1sum",
"check_perm",
"check_size",
"check_owner",
"check_group",
"check_mtime",
"check_inode",
"check_sha256sum"
],
"dir": "/usr/sbin",
"recursion_level": 256,
"diff_size_limit": 51200
}
],
"nodiff": [
"/etc/ssl/private.key"
],
"ignore": [
"/etc/mtab",
"/etc/hosts.deny",
"/etc/mail/statistics",
"/etc/random-seed",
"/etc/random.seed",
"/etc/adjtime",
"/etc/httpd/logs",
"/etc/utmpx",
"/etc/wtmpx",
"/etc/cups/certs",
"/etc/dumpdates",
"/etc/svc/volatile"
],
"ignore_sregex": [
".log$|.swp$"
],
"whodata": {
"restart_audit": "yes",
"startup_healthcheck": "yes"
},
"allow_remote_prefilter_cmd": "no",
"synchronization": {
"enabled": "yes",
"max_interval": 3600,
"interval": 300,
"response_timeout": 30,
"queue_size": 16384,
"max_eps": 10
},
"max_eps": 100,
"process_priority": 10,
"database": "disk"
}
},
"error": 0
}
Yes, you are right, it's not connected to the Kibana issue; it's a subject for another case.
Thank you very much for your help!
Hello.
I have a similar issue (https://github.com/wazuh/wazuh-kibana-app/issues/1534): no alerts shown in Kibana regarding agents; for example, I can see all Wazuh server events, but no agent info.
My Wazuh version is 4.2 and I've done a clean installation for distributed deployment:
Everything works just as expected (e-mail alert notifications), but I don't see any alerts/events in Kibana.
Logstash is not present on my hosts.
Filebeat is running (both manager and worker).
I get e-mail notifications normally (and via syslog) and can see alerts in /var/ossec/logs/alerts.
curl https://xxxx:9200/_cat/indices/wazuh-alerts-* -u admin:xxxx -k
curl -XGET 'https://xxxx:9200/_cat/indices' -k -u admin:xxxx
Kibana shows these errors when trying to search agent events:
My kibana conf:
My ES config: