tXambe closed this issue 4 years ago
Hi @tXambe !
The Wazuh app is set by default to connect to https://localhost:55000 with the default credentials foo:bar. If you changed any of these, or the Wazuh manager/API is running on a different machine, you will need to edit the wazuh.yml.
This file can be found here: /usr/share/kibana/optimize/wazuh/config/wazuh.yml
(please let me know if you are using a Wazuh version older than 3.12 as this path is different in older versions)
Please edit the wazuh.yml and add new API entries as needed in your hosts section; it should look like this:
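(The inline example from the original comment is missing here; as a minimal sketch, a hosts entry with placeholder values in the 3.12-era wazuh.yml format would look like this:)

```yaml
hosts:
  - default:               # arbitrary entry name for this API
      url: https://localhost   # URL of the Wazuh API
      port: 55000              # Wazuh API port
      user: foo                # API user (placeholder)
      password: bar            # API password (placeholder)
```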
There you have to enter the url, port, user and password needed to connect to your Wazuh API.
Please let me know if you have any questions!
Best Regards, Pablo Torres
Hello,
I don't have the path wazuh/config/wazuh.yml. I have a config.yml inside /usr/share/kibana/plugins/wazuh, but everything inside this file is commented out.
My versions:
cat /usr/share/kibana/plugins/wazuh/package.json | grep version "version": "3.10.2", "version": "7.4.2"
Kibana --> Kibana 7.4.2
And one last comment: when I run the API connection check in the portal I get this error:
Settings. 3005 - Some Wazuh daemons are not ready in node 'node01' (wazuh-modulesd->failed) (/api/check-api)
Thanks and regards
Hi @tXambe,
Yes, wazuh.yml doesn't exist in that 3.10.2 version; it was added in the newer 3.11+ versions.
I recommend using the latest version available, as it includes new features and bug fixes.
As that error log mentions, Settings. 3005 - Some Wazuh daemons are not ready in node 'node01' (wazuh-modulesd->failed) (/api/check-api)
there's something wrong with your Wazuh manager/API. Let's enable debug mode so we can get some extra details about that error:
This is how you can enable the debug mode in the Wazuh API:
Edit the configuration file (/var/ossec/api/configuration/config.js) and replace this line:
config.logs = "info";
with
config.logs = "debug";
Then restart the services:
systemctl restart wazuh-manager
systemctl restart wazuh-api
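As a shortcut, a one-liner that makes the same edit (a sketch assuming the default install path; back up config.js first):

```shell
# switch the API log level from "info" to "debug" in place,
# then restart wazuh-manager and wazuh-api as above
sed -i 's/config.logs = "info";/config.logs = "debug";/' /var/ossec/api/configuration/config.js
```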
And now please check the Wazuh API logs:
tail -n 50 /var/ossec/logs/api.log
Please share the output of that command with me so I can give you further assistance, thanks!
Best Regards, Pablo Torres
Hello,
The result of the Wazuh API logs:
tail -n 50 /var/ossec/logs/api.log
WazuhAPI 2020-05-20 13:22:03 gfisiem: [::ffff:1.1.1.1] GET /manager/status? - 200 - error: '0'.
WazuhAPI 2020-05-20 13:22:05 gfisiem: [::ffff:1.1.1.1] GET /manager/status? - 200 - error: '0'.
WazuhAPI 2020-05-20 13:22:07 gfisiem: [::ffff:1.1.1.1] GET /manager/status? - 200 - error: '0'.
WazuhAPI 2020-05-20 13:22:10 gfisiem: [::ffff:1.1.1.1] GET /manager/status? - 200 - error: '0'.
WazuhAPI 2020-05-20 13:22:12 gfisiem: [::ffff:1.1.1.1] GET /manager/status? - 200 - error: '0'.
WazuhAPI 2020-05-20 13:22:14 gfisiem: [::ffff:1.1.1.1] GET /manager/status? - 200 - error: '0'.
WazuhAPI 2020-05-20 13:22:16 gfisiem: [::ffff:1.1.1.1] GET /manager/status? - 200 - error: '0'.
WazuhAPI 2020-05-20 13:22:18 gfisiem: [::ffff:1.1.1.1] GET /manager/status? - 200 - error: '0'.
WazuhAPI 2020-05-20 13:22:21 gfisiem: [::ffff:1.1.1.1] GET /manager/status? - 200 - error: '0'.
WazuhAPI 2020-05-20 13:22:23 gfisiem: [::ffff:1.1.1.1] GET /manager/status? - 200 - error: '0'.
WazuhAPI 2020-05-20 13:24:03 gfisiem: [::ffff:1.1.1.1] GET /manager/info? - 200 - error: '1017'.
WazuhAPI 2020-05-20 13:25:19 gfisiem: [::ffff:1.1.1.1] GET /manager/info? - 200 - error: '1017'.
WazuhAPI 2020-05-20 13:25:20 gfisiem: [::ffff:1.1.1.1] GET /agents/summary? - 200 - error: '1017'.
WazuhAPI 2020-05-20 13:25:23 gfisiem: [::ffff:1.1.1.1] GET /manager/status? - 200 - error: '0'.
WazuhAPI 2020-05-20 13:25:25 gfisiem: [::ffff:1.1.1.1] GET /manager/status? - 200 - error: '0'.
WazuhAPI 2020-05-20 13:25:27 gfisiem: [::ffff:1.1.1.1] GET /manager/status? - 200 - error: '0'.
WazuhAPI 2020-05-20 13:25:29 gfisiem: [::ffff:1.1.1.1] GET /manager/status? - 200 - error: '0'.
WazuhAPI 2020-05-20 13:25:31 gfisiem: [::ffff:1.1.1.1] GET /manager/status? - 200 - error: '0'.
WazuhAPI 2020-05-20 13:25:33 gfisiem: [::ffff:1.1.1.1] GET /manager/info? - 200 - error: '1017'.
WazuhAPI 2020-05-20 13:25:33 gfisiem: [::ffff:1.1.1.1] GET /version? - 200 - error: '0'.
WazuhAPI 2020-05-20 13:25:34 gfisiem: [::ffff:1.1.1.1] GET /manager/status? - 200 - error: '0'.
WazuhAPI 2020-05-20 13:25:34 gfisiem: [::ffff:1.1.1.1] GET /agents/000? - 200 - error: '1017'.
WazuhAPI 2020-05-20 13:25:36 gfisiem: [::ffff:1.1.1.1] GET /manager/status? - 200 - error: '0'.
WazuhAPI 2020-05-20 13:25:38 gfisiem: [::ffff:1.1.1.1] GET /manager/status? - 200 - error: '0'.
WazuhAPI 2020-05-20 13:25:40 gfisiem: [::ffff:1.1.1.1] GET /manager/status? - 200 - error: '0'.
WazuhAPI 2020-05-20 13:25:43 gfisiem: [::ffff:1.1.1.1] GET /manager/status? - 200 - error: '0'.
WazuhAPI 2020-05-20 13:26:08 gfisiem: [::ffff:1.1.1.1] GET /version? - 200 - error: '0'.
WazuhAPI 2020-05-20 13:26:09 gfisiem: [::ffff:1.1.1.1] GET /agents/000? - 200 - error: '1017'.
WazuhAPI 2020-05-20 13:30:01 gfisiem: [::ffff:1.1.1.1] GET /agents/?offset=0&limit=1&q=id!%3D000 - 200 - error: '1017'.
WazuhAPI 2020-05-20 13:30:02 gfisiem: [::ffff:1.1.1.1] GET /cluster/status? - 200 - error: '1017'.
WazuhAPI 2020-05-20 13:31:08 gfisiem: [::ffff:1.1.1.1] User: "gfisiem" - Authentication failed.
WazuhAPI 2020-05-20 13:34:18 gfisiem: [::ffff:1.1.1.1] GET /manager/info? - 200 - error: '1017'.
WazuhAPI 2020-05-20 13:34:18 gfisiem: [::ffff:1.1.1.1] GET /manager/info? - 200 - error: '1017'.
WazuhAPI 2020-05-20 13:34:18 gfisiem: [::ffff:1.1.1.1] GET /version? - 200 - error: '0'.
WazuhAPI 2020-05-20 13:34:19 gfisiem: [::ffff:1.1.1.1] GET /agents/000? - 200 - error: '1017'.
WazuhAPI 2020-05-20 13:45:01 gfisiem: [::ffff:1.1.1.1] GET /agents/?offset=0&limit=1&q=id!%3D000 - 200 - error: '1017'.
WazuhAPI 2020-05-20 13:45:02 gfisiem: [::ffff:1.1.1.1] GET /cluster/status? - 200 - error: '1017'.
WazuhAPI 2020-05-20 14:00:01 gfisiem: [::ffff:1.1.1.1] GET /agents/?offset=0&limit=1&q=id!%3D000 - 200 - error: '1017'.
WazuhAPI 2020-05-20 14:00:02 gfisiem: [::ffff:1.1.1.1] GET /cluster/status? - 200 - error: '1017'.
WazuhAPI 2020-05-20 14:15:01 gfisiem: [::ffff:1.1.1.1] GET /agents/?offset=0&limit=1&q=id!%3D000 - 200 - error: '1017'.
WazuhAPI 2020-05-20 14:15:02 gfisiem: [::ffff:1.1.1.1] GET /cluster/status? - 200 - error: '1017'.
WazuhAPI 2020-05-20 14:30:01 gfisiem: [::ffff:1.1.1.1] GET /agents/?offset=0&limit=1&q=id!%3D000 - 200 - error: '1017'.
WazuhAPI 2020-05-20 14:30:02 gfisiem: [::ffff:1.1.1.1] GET /cluster/status? - 200 - error: '1017'.
WazuhAPI 2020-05-20 14:45:01 gfisiem: [::ffff:1.1.1.1] GET /agents/?offset=0&limit=1&q=id!%3D000 - 200 - error: '1017'.
WazuhAPI 2020-05-20 14:45:02 gfisiem: [::ffff:1.1.1.1] GET /cluster/status? - 200 - error: '1017'.
WazuhAPI 2020-05-20 15:00:02 gfisiem: [::ffff:1.1.1.1] GET /agents/?offset=0&limit=1&q=id!%3D000 - 200 - error: '1017'.
WazuhAPI 2020-05-20 15:00:03 gfisiem: [::ffff:1.1.1.1] GET /cluster/status? - 200 - error: '1017'.
WazuhAPI 2020-05-20 15:15:01 gfisiem: [::ffff:1.1.1.1] GET /agents/?offset=0&limit=1&q=id!%3D000 - 200 - error: '1017'.
WazuhAPI 2020-05-20 15:15:02 gfisiem: [::ffff:1.1.1.1] GET /cluster/status? - 200 - error: '1017'.
WazuhAPI 2020-05-20 15:16:11 : Listening on: https://:::55000
Hi @tXambe ,
Did you enable the debug mode (with config.logs = "debug";) and restart the Wazuh manager/API?
Please, run this request: (replace WAZUH_SERVER_IP with the IP of the machine where Wazuh is installed and also replace the user and password with your credentials)
curl WAZUH_SERVER_IP:55000/cluster/status -u user:password
Once you run this request, please share with me again the logs of the Wazuh API:
tail -n 50 /var/ossec/logs/api.log
Best Regards, Pablo Torres
Hello,
The result of curl:
curl https://1.1.1.1:55000/cluster/status -u user1
Enter host password for user 'user1':
curl: (60) Peer's certificate issuer has been marked as not trusted by the user.
More details here: http://curl.haxx.se/docs/sslcerts.html
curl performs SSL certificate verification by default, using a "bundle" of Certificate Authority (CA) public keys (CA certs). If the default bundle file isn't adequate, you can specify an alternate file using the --cacert option. If this HTTPS server uses a certificate signed by a CA represented in the bundle, the certificate verification probably failed due to a problem with the certificate (it might be expired, or the name might not match the domain name in the URL). If you'd like to turn off curl's verification of the certificate, use the -k (or --insecure) option.
The tail output:
{"message": "Some Wazuh daemons are not ready in node 'node01' (wazuh-modulesd->failed)", "error": 1017}
{"message": "Some Wazuh daemons are not ready in node 'node01' (wazuh-modulesd->failed)", "error": 1017}
WazuhAPI 2020-05-20 15:30:11 gfisiem: CMD - STDOUT: 105 bytes
WazuhAPI 2020-05-20 15:30:11 gfisiem: Some Wazuh daemons are not ready in node 'node01' (wazuh-modulesd->failed)
WazuhAPI 2020-05-20 15:30:11 gfisiem: [::ffff:1.1.1.1] GET /cluster/status? - 200 - error: '1017'.
Hello,
In case it helps:
/var/ossec/bin/ossec-control status
wazuh-clusterd not running...
wazuh-modulesd: Process 27448 not used by Wazuh, removing...
wazuh-modulesd not running...
ossec-monitord is running...
ossec-logcollector is running...
ossec-remoted is running...
ossec-syscheckd is running...
ossec-analysisd is running...
ossec-maild not running...
ossec-execd is running...
wazuh-db is running...
ossec-authd is running...
ossec-agentlessd not running...
ossec-integratord not running...
ossec-dbd not running...
ossec-csyslogd not running...
And
sqlite3 /var/ossec/var/db/global.db
SQLite version 3.7.17 2013-05-20 00:56:22
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> SELECT * FROM agent;
0|siemgfipro|127.0.0.1|127.0.0.1||CentOS Linux|7.7|7|7|||centos|Linux |spro |3.10.0-1062.4.1.el7.x86_64 |#1 SMP Fri Oct 18 17:15:30 UTC 2019 |x86_64|x86_64|Wazuh v3.10.2|||spro|node01|157348566|253402300799|updated|0|0|
Thanks for the output @tXambe ,
Let's now check the logs in ossec.log; please run this command and share the output with me:
tail -n 200 /var/ossec/logs/ossec.log
2020/05/20 12:13:47 ossec-analysisd: INFO: Ignoring file: '/etc/wtmpx'
2020/05/20 12:13:47 ossec-analysisd: INFO: Ignoring file: '/etc/cups/certs'
2020/05/20 12:13:47 ossec-analysisd: INFO: Ignoring file: '/etc/dumpdates'
2020/05/20 12:13:47 ossec-analysisd: INFO: Ignoring file: '/etc/svc/volatile'
2020/05/20 12:13:47 ossec-analysisd: INFO: Ignoring file: '/sys/kernel/security'
2020/05/20 12:13:47 ossec-analysisd: INFO: Ignoring file: '/sys/kernel/debug'
2020/05/20 12:13:47 ossec-analysisd: INFO: Ignoring file: '/dev/core'
2020/05/20 12:13:47 ossec-analysisd: INFO: Ignoring file: '^/proc'
2020/05/20 12:13:47 ossec-analysisd: INFO: Ignoring file: '.log$|.swp$'
2020/05/20 12:13:47 ossec-analysisd: INFO: Started (pid: 23787).
2020/05/20 12:13:47 ossec-logcollector: INFO: Monitoring output of command(360): df -P
2020/05/20 12:13:47 ossec-logcollector: INFO: Monitoring full output of command(360): netstat -tulpn | sed 's/\([[:alnum:]]\+\)\ \+[[:digit:]]\+\ \+[[:digit:]]\+\ \+\(.*\):\([[:digit:]]*\)\ \+\([0-9\.\:\*]\+\).\+\ \([[:digit:]]*\/[[:alnum:]\-]*\).*/\1 \2 == \3 == \4 \5/' | sort -k 4 -g | sed 's/ == \(.*\) ==/:\1/' | sed 1,2d
2020/05/20 12:13:47 ossec-logcollector: INFO: Monitoring full output of command(360): last -n 20
2020/05/20 12:13:47 ossec-logcollector: INFO: (1950): Analyzing file: '/var/log/audit/audit.log'.
2020/05/20 12:13:47 ossec-logcollector: INFO: (1950): Analyzing file: '/var/ossec/logs/active-responses.log'.
2020/05/20 12:13:47 ossec-logcollector: INFO: (1950): Analyzing file: '/var/log/messages'.
2020/05/20 12:13:47 ossec-logcollector: INFO: (1950): Analyzing file: '/var/log/secure'.
2020/05/20 12:13:47 ossec-logcollector: INFO: (1950): Analyzing file: '/var/log/maillog'.
2020/05/20 12:13:47 ossec-logcollector: INFO: (1950): Analyzing file: '/var/log/remote/pfSense.log'.
2020/05/20 12:13:47 ossec-logcollector: INFO: (1950): Analyzing file: '/var/log/remote/watchguard.log'.
2020/05/20 12:13:47 ossec-logcollector: INFO: (1950): Analyzing file: '/var/log/remote/fortigate.log'.
2020/05/20 12:13:47 ossec-logcollector: INFO: Started (pid: 23808).
2020/05/20 12:13:47 sca: INFO: Starting Security Configuration Assessment scan.
2020/05/20 12:13:47 wazuh-modulesd:syscollector: INFO: Module started.
2020/05/20 12:13:47 wazuh-modulesd:oscap: INFO: Module started.
2020/05/20 12:13:47 wazuh-modulesd:oscap: INFO: Starting evaluation.
2020/05/20 12:13:47 sca: INFO: Starting evaluation of policy: '/var/ossec/ruleset/sca/cis_rhel7_linux.yml'
2020/05/20 12:13:47 ossec-remoted: INFO: (4111): Maximum number of agents allowed: '14000'.
2020/05/20 12:13:47 ossec-remoted: INFO: (1410): Reading authentication keys file.
2020/05/20 12:13:47 wazuh-modulesd:vulnerability-detector: INFO: (5461): Starting Ubuntu Bionic database update...
2020/05/20 12:13:48 wazuh-modulesd:syscollector: INFO: Starting evaluation.
2020/05/20 12:13:49 wazuh-modulesd:vulnerability-detector: ERROR: (5402): Could not load the CVE OVAL for BIONIC. XMLERR: Attribute 'B.;9▒@A▒BB▒▒▒▒' has no value.
2020/05/20 12:13:49 wazuh-modulesd:vulnerability-detector: ERROR: (5426): CVE database could not be updated.
2020/05/20 12:13:49 wazuh-modulesd:vulnerability-detector: INFO: (5452): Starting vulnerability scanning.
2020/05/20 12:13:50 ossec-syscheckd: INFO: Started (pid: 23794).
2020/05/20 12:13:50 ossec-syscheckd: INFO: (6003): Monitoring directory: '/etc', with options 'perm | size | owner | group | md5sum | sha1sum | sha256sum | mtime | inode'.
2020/05/20 12:13:50 ossec-syscheckd: INFO: (6003): Monitoring directory: '/usr/bin', with options 'perm | size | owner | group | md5sum | sha1sum | sha256sum | mtime | inode'.
2020/05/20 12:13:50 ossec-syscheckd: INFO: (6003): Monitoring directory: '/usr/sbin', with options 'perm | size | owner | group | md5sum | sha1sum | sha256sum | mtime | inode'.
2020/05/20 12:13:50 ossec-syscheckd: INFO: (6003): Monitoring directory: '/bin' (/usr/bin), with options 'perm | size | owner | group | md5sum | sha1sum | sha256sum | mtime | inode'.
2020/05/20 12:13:50 ossec-syscheckd: INFO: (6003): Monitoring directory: '/sbin' (/usr/sbin), with options 'perm | size | owner | group | md5sum | sha1sum | sha256sum | mtime | inode'.
2020/05/20 12:13:50 ossec-syscheckd: INFO: (6003): Monitoring directory: '/boot', with options 'perm | size | owner | group | md5sum | sha1sum | sha256sum | mtime | inode'.
2020/05/20 12:13:50 ossec-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/mtab'
2020/05/20 12:13:50 ossec-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/hosts.deny'
2020/05/20 12:13:50 ossec-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/mail/statistics'
2020/05/20 12:13:50 ossec-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/random-seed'
2020/05/20 12:13:50 ossec-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/random.seed'
2020/05/20 12:13:50 ossec-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/adjtime'
2020/05/20 12:13:50 ossec-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/httpd/logs'
2020/05/20 12:13:50 ossec-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/utmpx'
2020/05/20 12:13:50 ossec-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/wtmpx'
2020/05/20 12:13:50 ossec-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/cups/certs'
2020/05/20 12:13:50 ossec-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/dumpdates'
2020/05/20 12:13:50 ossec-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/svc/volatile'
2020/05/20 12:13:50 ossec-syscheckd: INFO: (6206): Ignore 'file' entry '/sys/kernel/security'
2020/05/20 12:13:50 ossec-syscheckd: INFO: (6206): Ignore 'file' entry '/sys/kernel/debug'
2020/05/20 12:13:50 ossec-syscheckd: INFO: (6206): Ignore 'file' entry '/dev/core'
2020/05/20 12:13:50 ossec-syscheckd: INFO: (6207): Ignore 'file' sregex '^/proc'
2020/05/20 12:13:50 ossec-syscheckd: INFO: (6207): Ignore 'file' sregex '.log$|.swp$'
2020/05/20 12:13:50 ossec-syscheckd: INFO: (6004): No diff for file: '/etc/ssl/private.key'
2020/05/20 12:13:50 rootcheck: INFO: Started (pid: 23794).
2020/05/20 12:13:57 wazuh-modulesd:syscollector: INFO: Evaluation finished.
2020/05/20 12:14:02 wazuh-modulesd:vulnerability-detector: INFO: (5453): Vulnerability scanning finished.
2020/05/20 12:14:03 wazuh-modulesd:vulnerability-detector: INFO: (5461): Starting Ubuntu Xenial database update...
2020/05/20 12:14:04 wazuh-modulesd:vulnerability-detector: ERROR: (5402): Could not load the CVE OVAL for XENIAL. XMLERR: Attribute 'k▒▒▒' not followed by a " or '.
2020/05/20 12:14:04 wazuh-modulesd:vulnerability-detector: ERROR: (5426): CVE database could not be updated.
2020/05/20 12:14:05 ossec-syscheckd: INFO: (6010): File integrity monitoring scan frequency: 43200 seconds
2020/05/20 12:14:05 ossec-syscheckd: INFO: (6008): File integrity monitoring scan started.
2020/05/20 12:14:05 rootcheck: INFO: Starting rootcheck scan.
2020/05/20 12:14:05 wazuh-modulesd:vulnerability-detector: INFO: (5461): Starting Ubuntu Trusty database update...
2020/05/20 12:14:06 wazuh-modulesd:vulnerability-detector: ERROR: (5402): Could not load the CVE OVAL for TRUSTY. XMLERR: Attribute 'U▒▒f9"▒*▒i▒▒1Z%[▒)B▒wVW▒▒▒▒j▒a▒▒' has no value.
2020/05/20 12:14:06 wazuh-modulesd:vulnerability-detector: ERROR: (5426): CVE database could not be updated.
2020/05/20 12:14:07 wazuh-modulesd:vulnerability-detector: INFO: (5461): Starting Debian Stretch database update...
2020/05/20 12:14:08 sca: INFO: Evaluation finished for policy '/var/ossec/ruleset/sca/cis_rhel7_linux.yml'
2020/05/20 12:14:09 sca: INFO: Security Configuration Assessment scan finished. Duration: 22 seconds.
2020/05/20 12:15:22 ossec-syscheckd: INFO: (6009): File integrity monitoring scan ended.
2020/05/20 12:15:27 wazuh-modulesd:vulnerability-detector: INFO: (5461): Starting Debian Jessie database update...
2020/05/20 12:16:34 wazuh-modulesd:vulnerability-detector: INFO: (5461): Starting Debian Wheezy database update...
2020/05/20 12:16:37 wazuh-modulesd:vulnerability-detector: INFO: (5461): Starting Red Hat Enterprise Linux database update...
2020/05/20 12:17:03 wazuh-modulesd:vulnerability-detector: ERROR: (5493): The version of '(null)' could not be extracted.
2020/05/20 12:31:03 ossec-logcollector: WARNING: Target 'agent' message queue is full (1024). Log lines may be lost.
2020/05/20 13:13:44 rootcheck: INFO: Ending rootcheck scan.
2020/05/20 15:15:29 ossec-monitord: INFO: (1225): SIGNAL [(15)-(Terminated)] Received. Exit Cleaning...
2020/05/20 15:15:30 ossec-logcollector: INFO: (1225): SIGNAL [(15)-(Terminated)] Received. Exit Cleaning...
2020/05/20 15:15:30 ossec-remoted: INFO: (1225): SIGNAL [(15)-(Terminated)] Received. Exit Cleaning...
2020/05/20 15:15:30 ossec-syscheckd: INFO: (1225): SIGNAL [(15)-(Terminated)] Received. Exit Cleaning...
2020/05/20 15:15:31 ossec-analysisd: INFO: (1225): SIGNAL [(15)-(Terminated)] Received. Exit Cleaning...
2020/05/20 15:15:31 ossec-execd: INFO: (1314): Shutdown received. Deleting responses.
2020/05/20 15:15:31 ossec-execd: INFO: (1225): SIGNAL [(15)-(Terminated)] Received. Exit Cleaning...
2020/05/20 15:15:31 wazuh-db: INFO: (1225): SIGNAL [(15)-(Terminated)] Received. Exit Cleaning...
2020/05/20 15:15:32 ossec-authd: INFO: (1225): SIGNAL [(15)-(Terminated)] Received. Exit Cleaning...
2020/05/20 15:15:33 ossec-authd: INFO: Exiting...
2020/05/20 15:15:40 ossec-csyslogd: INFO: Remote syslog server not configured. Clean exit.
2020/05/20 15:15:40 ossec-dbd: INFO: Database not configured. Clean exit.
2020/05/20 15:15:40 ossec-integratord: INFO: Remote integrations not configured. Clean exit.
2020/05/20 15:15:40 ossec-agentlessd: INFO: Not configured. Exiting.
2020/05/20 15:15:40 ossec-authd: INFO: Started (pid: 27382).
2020/05/20 15:15:40 ossec-authd: INFO: Accepting connections on port 1515. No password required.
2020/05/20 15:15:40 wazuh-db: INFO: Started (pid: 27388).
2020/05/20 15:15:40 ossec-execd: INFO: Started (pid: 27405).
2020/05/20 15:15:40 ossec-authd: INFO: Setting network timeout to 1.000000 sec.
2020/05/20 15:15:40 ossec-remoted: INFO: Started (pid: 27430). Listening on port 1514/TCP (secure).
2020/05/20 15:15:40 ossec-monitord: INFO: Started (pid: 27442).
2020/05/20 15:15:40 wazuh-modulesd: INFO: Process started.
2020/05/20 15:15:40 wazuh-modulesd:ciscat: INFO: Module disabled. Exiting...
2020/05/20 15:15:40 wazuh-modulesd:osquery: INFO: Module disabled. Exiting...
2020/05/20 15:15:40 wazuh-modulesd:docker-listener: INFO: Module docker-listener started.
2020/05/20 15:15:40 wazuh-modulesd:docker-listener: INFO: Starting to listening Docker events.
2020/05/20 15:15:40 sca: INFO: Module started.
2020/05/20 15:15:40 sca: INFO: Loaded policy '/var/ossec/ruleset/sca/cis_rhel7_linux.yml'
2020/05/20 15:15:40 wazuh-modulesd:database: INFO: Module started.
2020/05/20 15:15:40 wazuh-modulesd:download: INFO: Module started
2020/05/20 15:15:42 ossec-analysisd: INFO: Total rules enabled: '4137'
2020/05/20 15:15:42 ossec-analysisd: INFO: Ignoring file: '/etc/mtab'
2020/05/20 15:15:42 ossec-analysisd: INFO: Ignoring file: '/etc/hosts.deny'
2020/05/20 15:15:42 ossec-analysisd: INFO: Ignoring file: '/etc/mail/statistics'
2020/05/20 15:15:42 ossec-analysisd: INFO: Ignoring file: '/etc/random-seed'
2020/05/20 15:15:42 ossec-analysisd: INFO: Ignoring file: '/etc/random.seed'
2020/05/20 15:15:42 ossec-analysisd: INFO: Ignoring file: '/etc/adjtime'
2020/05/20 15:15:42 ossec-analysisd: INFO: Ignoring file: '/etc/httpd/logs'
2020/05/20 15:15:42 ossec-analysisd: INFO: Ignoring file: '/etc/utmpx'
2020/05/20 15:15:42 ossec-analysisd: INFO: Ignoring file: '/etc/wtmpx'
2020/05/20 15:15:42 ossec-analysisd: INFO: Ignoring file: '/etc/cups/certs'
2020/05/20 15:15:42 ossec-analysisd: INFO: Ignoring file: '/etc/dumpdates'
2020/05/20 15:15:42 ossec-analysisd: INFO: Ignoring file: '/etc/svc/volatile'
2020/05/20 15:15:42 ossec-analysisd: INFO: Ignoring file: '/sys/kernel/security'
2020/05/20 15:15:42 ossec-analysisd: INFO: Ignoring file: '/sys/kernel/debug'
2020/05/20 15:15:42 ossec-analysisd: INFO: Ignoring file: '/dev/core'
2020/05/20 15:15:42 ossec-analysisd: INFO: Ignoring file: '^/proc'
2020/05/20 15:15:42 ossec-analysisd: INFO: Ignoring file: '.log$|.swp$'
2020/05/20 15:15:42 ossec-analysisd: INFO: Started (pid: 27413).
2020/05/20 15:15:42 ossec-logcollector: INFO: Monitoring output of command(360): df -P
2020/05/20 15:15:42 ossec-logcollector: INFO: Monitoring full output of command(360): netstat -tulpn | sed 's/\([[:alnum:]]\+\)\ \+[[:digit:]]\+\ \+[[:digit:]]\+\ \+\(.*\):\([[:digit:]]*\)\ \+\([0-9\.\:\*]\+\).\+\ \([[:digit:]]*\/[[:alnum:]\-]*\).*/\1 \2 == \3 == \4 \5/' | sort -k 4 -g | sed 's/ == \(.*\) ==/:\1/' | sed 1,2d
2020/05/20 15:15:42 ossec-logcollector: INFO: Monitoring full output of command(360): last -n 20
2020/05/20 15:15:42 ossec-logcollector: INFO: (1950): Analyzing file: '/var/log/audit/audit.log'.
2020/05/20 15:15:42 ossec-logcollector: INFO: (1950): Analyzing file: '/var/ossec/logs/active-responses.log'.
2020/05/20 15:15:42 ossec-logcollector: INFO: (1950): Analyzing file: '/var/log/messages'.
2020/05/20 15:15:42 ossec-logcollector: INFO: (1950): Analyzing file: '/var/log/secure'.
2020/05/20 15:15:42 ossec-logcollector: INFO: (1950): Analyzing file: '/var/log/maillog'.
2020/05/20 15:15:42 ossec-logcollector: INFO: (1950): Analyzing file: '/var/log/remote/pfSense.log'.
2020/05/20 15:15:42 ossec-logcollector: INFO: (1950): Analyzing file: '/var/log/remote/watchguard.log'.
2020/05/20 15:15:42 ossec-logcollector: INFO: (1950): Analyzing file: '/var/log/remote/fortigate.log'.
2020/05/20 15:15:42 ossec-logcollector: INFO: Started (pid: 27436).
2020/05/20 15:15:42 sca: INFO: Starting Security Configuration Assessment scan.
2020/05/20 15:15:42 sca: INFO: Starting evaluation of policy: '/var/ossec/ruleset/sca/cis_rhel7_linux.yml'
2020/05/20 15:15:42 wazuh-modulesd:vulnerability-detector: INFO: (5461): Starting Ubuntu Bionic database update...
2020/05/20 15:15:42 wazuh-modulesd:syscollector: INFO: Module started.
2020/05/20 15:15:42 wazuh-modulesd:oscap: INFO: Module started.
2020/05/20 15:15:42 wazuh-modulesd:oscap: INFO: Starting evaluation.
2020/05/20 15:15:43 ossec-remoted: INFO: (4111): Maximum number of agents allowed: '14000'.
2020/05/20 15:15:43 ossec-remoted: INFO: (1410): Reading authentication keys file.
2020/05/20 15:15:43 wazuh-modulesd:syscollector: INFO: Starting evaluation.
2020/05/20 15:15:44 wazuh-modulesd:vulnerability-detector: ERROR: (5402): Could not load the CVE OVAL for BIONIC. XMLERR: Attribute '▒▒.[Z$L▒▒CD▒▒?()▒5LƦ▒▒b' not followed by a " or '.
2020/05/20 15:15:44 wazuh-modulesd:vulnerability-detector: ERROR: (5426): CVE database could not be updated.
2020/05/20 15:15:44 wazuh-modulesd:vulnerability-detector: INFO: (5452): Starting vulnerability scanning.
2020/05/20 15:15:45 ossec-syscheckd: INFO: Started (pid: 27422).
2020/05/20 15:15:45 ossec-syscheckd: INFO: (6003): Monitoring directory: '/etc', with options 'perm | size | owner | group | md5sum | sha1sum | sha256sum | mtime | inode'.
2020/05/20 15:15:45 ossec-syscheckd: INFO: (6003): Monitoring directory: '/usr/bin', with options 'perm | size | owner | group | md5sum | sha1sum | sha256sum | mtime | inode'.
2020/05/20 15:15:45 ossec-syscheckd: INFO: (6003): Monitoring directory: '/usr/sbin', with options 'perm | size | owner | group | md5sum | sha1sum | sha256sum | mtime | inode'.
2020/05/20 15:15:45 ossec-syscheckd: INFO: (6003): Monitoring directory: '/bin' (/usr/bin), with options 'perm | size | owner | group | md5sum | sha1sum | sha256sum | mtime | inode'.
2020/05/20 15:15:45 ossec-syscheckd: INFO: (6003): Monitoring directory: '/sbin' (/usr/sbin), with options 'perm | size | owner | group | md5sum | sha1sum | sha256sum | mtime | inode'.
2020/05/20 15:15:45 ossec-syscheckd: INFO: (6003): Monitoring directory: '/boot', with options 'perm | size | owner | group | md5sum | sha1sum | sha256sum | mtime | inode'.
2020/05/20 15:15:45 ossec-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/mtab'
2020/05/20 15:15:45 ossec-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/hosts.deny'
2020/05/20 15:15:45 ossec-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/mail/statistics'
2020/05/20 15:15:45 ossec-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/random-seed'
2020/05/20 15:15:45 ossec-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/random.seed'
2020/05/20 15:15:45 ossec-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/adjtime'
2020/05/20 15:15:45 ossec-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/httpd/logs'
2020/05/20 15:15:45 ossec-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/utmpx'
2020/05/20 15:15:45 ossec-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/wtmpx'
2020/05/20 15:15:45 ossec-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/cups/certs'
2020/05/20 15:15:45 ossec-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/dumpdates'
2020/05/20 15:15:45 ossec-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/svc/volatile'
2020/05/20 15:15:45 ossec-syscheckd: INFO: (6206): Ignore 'file' entry '/sys/kernel/security'
2020/05/20 15:15:45 ossec-syscheckd: INFO: (6206): Ignore 'file' entry '/sys/kernel/debug'
2020/05/20 15:15:45 ossec-syscheckd: INFO: (6206): Ignore 'file' entry '/dev/core'
2020/05/20 15:15:45 ossec-syscheckd: INFO: (6207): Ignore 'file' sregex '^/proc'
2020/05/20 15:15:45 ossec-syscheckd: INFO: (6207): Ignore 'file' sregex '.log$|.swp$'
2020/05/20 15:15:45 ossec-syscheckd: INFO: (6004): No diff for file: '/etc/ssl/private.key'
2020/05/20 15:15:45 rootcheck: INFO: Started (pid: 27422).
2020/05/20 15:15:52 wazuh-modulesd:syscollector: INFO: Evaluation finished.
2020/05/20 15:15:58 sca: INFO: Evaluation finished for policy '/var/ossec/ruleset/sca/cis_rhel7_linux.yml'
2020/05/20 15:15:59 sca: INFO: Security Configuration Assessment scan finished. Duration: 17 seconds.
2020/05/20 15:16:00 ossec-syscheckd: INFO: (6010): File integrity monitoring scan frequency: 43200 seconds
2020/05/20 15:16:00 ossec-syscheckd: INFO: (6008): File integrity monitoring scan started.
2020/05/20 15:16:00 rootcheck: INFO: Starting rootcheck scan.
2020/05/20 15:16:08 wazuh-modulesd:vulnerability-detector: INFO: (5453): Vulnerability scanning finished.
2020/05/20 15:16:10 wazuh-modulesd:vulnerability-detector: INFO: (5461): Starting Ubuntu Xenial database update...
2020/05/20 15:16:11 wazuh-modulesd:vulnerability-detector: ERROR: (5402): Could not load the CVE OVAL for XENIAL. XMLERR: Attribute '▒▒?▒Uh▒@▒▒▒C▒▒▒▒▒▒▒;v▒s▒S▒o▒5▒B▒▒▒2N▒▒▒▒▒k▒▒▒▒^Q▒▒▒' has no value.
2020/05/20 15:16:11 wazuh-modulesd:vulnerability-detector: ERROR: (5426): CVE database could not be updated.
2020/05/20 15:16:12 wazuh-modulesd:vulnerability-detector: INFO: (5461): Starting Ubuntu Trusty database update...
2020/05/20 15:16:13 wazuh-modulesd:vulnerability-detector: ERROR: (5402): Could not load the CVE OVAL for TRUSTY. XMLERR: Attribute '!J' has no value.
2020/05/20 15:16:13 wazuh-modulesd:vulnerability-detector: ERROR: (5426): CVE database could not be updated.
2020/05/20 15:16:14 wazuh-modulesd:vulnerability-detector: INFO: (5461): Starting Debian Stretch database update...
2020/05/20 15:16:15 wazuh-modulesd:vulnerability-detector: INFO: (5461): Starting Debian Jessie database update...
2020/05/20 15:16:16 wazuh-modulesd:vulnerability-detector: INFO: (5461): Starting Debian Wheezy database update...
2020/05/20 15:16:17 wazuh-modulesd:vulnerability-detector: INFO: (5461): Starting Red Hat Enterprise Linux database update...
2020/05/20 15:16:41 wazuh-modulesd:vulnerability-detector: ERROR: (5493): The version of '(null)' could not be extracted.
2020/05/20 15:17:10 ossec-syscheckd: INFO: (6009): File integrity monitoring scan ended.
2020/05/20 15:30:04 ossec-logcollector: WARNING: Target 'agent' message queue is full (1024). Log lines may be lost.
Thanks, it looks like there's something wrong with the vulnerability-detector module. Could you please share your Wazuh configuration with me?
You can find it here:
cat /var/ossec/etc/ossec.conf
<!--
Wazuh - Manager - Default configuration for centos 7.7
More info at: https://documentation.wazuh.com
Mailing list: https://groups.google.com/forum/#!forum/wazuh
-->
<ossec_config>
<global>
<jsonout_output>yes</jsonout_output>
<alerts_log>yes</alerts_log>
<logall>no</logall>
<logall_json>no</logall_json>
<email_notification>no</email_notification>
<smtp_server>smtp.example.wazuh.com</smtp_server>
<email_from>ossecm@example.wazuh.com</email_from>
<email_to>recipient@example.wazuh.com</email_to>
<email_maxperhour>12</email_maxperhour>
<email_log_source>alerts.log</email_log_source>
</global>
<alerts>
<log_alert_level>3</log_alert_level>
<email_alert_level>12</email_alert_level>
</alerts>
<!-- Choose between "plain", "json", or "plain,json" for the format of internal logs -->
<logging>
<log_format>plain</log_format>
</logging>
<remote>
<connection>secure</connection>
<port>1514</port>
<protocol>tcp</protocol>
<queue_size>131072</queue_size>
</remote>
<!-- Policy monitoring -->
<rootcheck>
<disabled>no</disabled>
<check_files>yes</check_files>
<check_trojans>yes</check_trojans>
<check_dev>yes</check_dev>
<check_sys>yes</check_sys>
<check_pids>yes</check_pids>
<check_ports>yes</check_ports>
<check_if>yes</check_if>
<!-- Frequency that rootcheck is executed - every 12 hours -->
<frequency>43200</frequency>
<rootkit_files>/var/ossec/etc/rootcheck/rootkit_files.txt</rootkit_files>
<rootkit_trojans>/var/ossec/etc/rootcheck/rootkit_trojans.txt</rootkit_trojans>
<skip_nfs>yes</skip_nfs>
</rootcheck>
<wodle name="open-scap">
<disabled>no</disabled>
<timeout>1800</timeout>
<interval>1d</interval>
<scan-on-start>yes</scan-on-start>
<content path="ssg-centos-7-ds.xml" type="xccdf">
<profile>xccdf_org.ssgproject.content_profile_pci-dss</profile>
<profile>xccdf_org.ssgproject.content_profile_common</profile>
</content>
</wodle>
<wodle name="cis-cat">
<disabled>yes</disabled>
<timeout>1800</timeout>
<interval>1d</interval>
<scan-on-start>yes</scan-on-start>
<java_path>wodles/java</java_path>
<ciscat_path>wodles/ciscat</ciscat_path>
</wodle>
<!-- Osquery integration -->
<wodle name="osquery">
<disabled>yes</disabled>
<run_daemon>yes</run_daemon>
<log_path>/var/log/osquery/osqueryd.results.log</log_path>
<config_path>/etc/osquery/osquery.conf</config_path>
<add_labels>yes</add_labels>
</wodle>
<!-- System inventory -->
<wodle name="syscollector">
<disabled>no</disabled>
<interval>1h</interval>
<scan_on_start>yes</scan_on_start>
<hardware>yes</hardware>
<os>yes</os>
<network>yes</network>
<packages>yes</packages>
<ports all="no">yes</ports>
<processes>yes</processes>
</wodle>
<wodle name="docker-listener">
<disabled>no</disabled>
</wodle>
<sca>
<enabled>yes</enabled>
<scan_on_start>yes</scan_on_start>
<interval>12h</interval>
<skip_nfs>yes</skip_nfs>
</sca>
<wodle name="vulnerability-detector">
<disabled>no</disabled>
<interval>5m</interval>
<ignore_time>6h</ignore_time>
<run_on_start>yes</run_on_start>
<feed name="ubuntu-18">
<disabled>no</disabled>
<update_interval>1h</update_interval>
</feed>
<feed name="ubuntu-16">
<disabled>no</disabled>
<update_interval>1h</update_interval>
</feed>
<feed name="ubuntu-14">
<disabled>no</disabled>
<update_interval>1h</update_interval>
</feed>
<feed name="redhat">
<disabled>no</disabled>
<update_from_year>2010</update_from_year>
<update_interval>1h</update_interval>
</feed>
<feed name="debian-9">
<disabled>no</disabled>
<update_interval>1h</update_interval>
</feed>
<feed name="debian-8">
<disabled>no</disabled>
<update_interval>1h</update_interval>
</feed>
<feed name="debian-7">
<disabled>no</disabled>
<update_interval>1h</update_interval>
</feed>
</wodle>
<!-- Osquery integration -->
<!-- File integrity monitoring -->
<syscheck>
<disabled>no</disabled>
<!-- Frequency that syscheck is executed default every 12 hours -->
<frequency>43200</frequency>
<scan_on_start>yes</scan_on_start>
<!-- Generate alert when new file detected -->
<alert_new_files>yes</alert_new_files>
<!-- Don't ignore files that change more than 'frequency' times -->
<auto_ignore frequency="10" timeframe="3600">no</auto_ignore>
<!-- Directories to check (perform all possible verifications) -->
<directories check_all="yes">/etc,/usr/bin,/usr/sbin</directories>
<directories check_all="yes">/bin,/sbin,/boot</directories>
<!-- Files/directories to ignore -->
<ignore>/etc/mtab</ignore>
<ignore>/etc/hosts.deny</ignore>
<ignore>/etc/mail/statistics</ignore>
<ignore>/etc/random-seed</ignore>
<ignore>/etc/random.seed</ignore>
<ignore>/etc/adjtime</ignore>
<ignore>/etc/httpd/logs</ignore>
<ignore>/etc/utmpx</ignore>
<ignore>/etc/wtmpx</ignore>
<ignore>/etc/cups/certs</ignore>
<ignore>/etc/dumpdates</ignore>
<ignore>/etc/svc/volatile</ignore>
<ignore>/sys/kernel/security</ignore>
<ignore>/sys/kernel/debug</ignore>
<ignore>/dev/core</ignore>
<!-- File types to ignore -->
<ignore type="sregex">^/proc</ignore>
<ignore type="sregex">.log$|.swp$</ignore>
<!-- Check the file, but never compute the diff -->
<nodiff>/etc/ssl/private.key</nodiff>
<skip_nfs>yes</skip_nfs>
</syscheck>
<!-- Active response -->
<global>
<white_list>127.0.0.1</white_list>
<white_list>^localhost.localdomain$</white_list>
<white_list>172.16.10.190</white_list>
<white_list>172.16.10.191</white_list>
</global>
<command>
<name>disable-account</name>
<executable>disable-account.sh</executable>
<expect>user</expect>
<timeout_allowed>yes</timeout_allowed>
</command>
<command>
<name>restart-ossec</name>
<executable>restart-ossec.sh</executable>
<expect/>
</command>
<command>
<name>firewall-drop</name>
<executable>firewall-drop.sh</executable>
<expect>srcip</expect>
<timeout_allowed>yes</timeout_allowed>
</command>
<command>
<name>host-deny</name>
<executable>host-deny.sh</executable>
<expect>srcip</expect>
<timeout_allowed>yes</timeout_allowed>
</command>
<command>
<name>route-null</name>
<executable>route-null.sh</executable>
<expect>srcip</expect>
<timeout_allowed>yes</timeout_allowed>
</command>
<command>
<name>win_route-null</name>
<executable>route-null.cmd</executable>
<expect>srcip</expect>
<timeout_allowed>yes</timeout_allowed>
</command>
<command>
<name>win_route-null-2012</name>
<executable>route-null-2012.cmd</executable>
<expect>srcip</expect>
<timeout_allowed>yes</timeout_allowed>
</command>
<command>
<name>netsh</name>
<executable>netsh.cmd</executable>
<expect>srcip</expect>
<timeout_allowed>yes</timeout_allowed>
</command>
<command>
<name>netsh-win-2016</name>
<executable>netsh-win-2016.cmd</executable>
<expect>srcip</expect>
<timeout_allowed>yes</timeout_allowed>
</command>
<!--
<active-response>
active-response options here
</active-response>
-->
<!-- Log analysis -->
<localfile>
<log_format>command</log_format>
<command>df -P</command>
<frequency>360</frequency>
</localfile>
<localfile>
<log_format>full_command</log_format>
<command>netstat -tulpn | sed 's/\([[:alnum:]]\+\)\ \+[[:digit:]]\+\ \+[[:digit:]]\+\ \+\(.*\):\([[:digit:]]*\)\ \+\([0-9\.\:\*]\+\).\+\ \([[:digit:]]*\/[[:alnum:]\-]*\).*/\1 \2 == \3 == \4 \5/' | sort -k 4 -g | sed 's/ == \(.*\) ==/:\1/' | sed 1,2d</command>
<alias>netstat listening ports</alias>
<frequency>360</frequency>
</localfile>
<localfile>
<log_format>full_command</log_format>
<command>last -n 20</command>
<frequency>360</frequency>
</localfile>
<ruleset>
<!-- Default ruleset -->
<decoder_dir>ruleset/decoders</decoder_dir>
<rule_dir>ruleset/rules</rule_dir>
<rule_exclude>0215-policy_rules.xml</rule_exclude>
<rule_exclude>0540-pfsense_rules.xml</rule_exclude>
<rule_exclude>0390-fortigate_rules.xml</rule_exclude>
<list>etc/lists/audit-keys</list>
<list>etc/lists/amazon/aws-eventnames</list>
<list>etc/lists/security-eventchannel</list>
<!-- User-defined ruleset -->
<decoder_dir>etc/decoders</decoder_dir>
<rule_dir>etc/rules</rule_dir>
<list>etc/lists/blacklist-alienvault</list>
</ruleset>
<!-- Configuration for ossec-authd -->
<auth>
<disabled>no</disabled>
<port>1515</port>
<use_source_ip>yes</use_source_ip>
<force_insert>yes</force_insert>
<force_time>0</force_time>
<purge>yes</purge>
<use_password>no</use_password>
<limit_maxagents>yes</limit_maxagents>
<ciphers>HIGH:!ADH:!EXP:!MD5:!RC4:!3DES:!CAMELLIA:@STRENGTH</ciphers>
<!-- <ssl_agent_ca></ssl_agent_ca>
-->
<ssl_verify_host>no</ssl_verify_host>
<ssl_manager_cert>/var/ossec/etc/sslmanager.cert</ssl_manager_cert>
<ssl_manager_key>/var/ossec/etc/sslmanager.key</ssl_manager_key>
<ssl_auto_negotiate>no</ssl_auto_negotiate>
</auth>
<cluster>
<name>wazuh</name>
<node_name>node01</node_name>
<node_type>master</node_type>
<key/>
<port>1516</port>
<bind_addr>0.0.0.0</bind_addr>
<nodes>
<node>NODE_IP</node>
</nodes>
<hidden>no</hidden>
<disabled>yes</disabled>
</cluster>
</ossec_config>
<ossec_config>
<localfile>
<log_format>audit</log_format>
<location>/var/log/audit/audit.log</location>
</localfile>
<localfile>
<log_format>syslog</log_format>
<location>/var/ossec/logs/active-responses.log</location>
</localfile>
<localfile>
<log_format>syslog</log_format>
<location>/var/log/messages</location>
</localfile>
<localfile>
<log_format>syslog</log_format>
<location>/var/log/secure</location>
</localfile>
<localfile>
<log_format>syslog</log_format>
<location>/var/log/maillog</location>
</localfile>
<!-- remote services collected via rsyslog -->
<localfile>
<log_format>syslog</log_format>
<location>/var/log/remote/pfSense.log</location>
</localfile>
<localfile>
<log_format>syslog</log_format>
<location>/var/log/remote/watchguard.log</location>
</localfile>
<localfile>
<log_format>syslog</log_format>
<location>/var/log/remote/fortigate.log</location>
</localfile>
</ossec_config>
Hi @tXambe ,
Is that the content of your ossec.conf configuration? That doesn't look like a correct configuration; it should look something like this: https://documentation.wazuh.com/3.12/user-manual/reference/ossec-conf/
Please check the ossec.conf again and make sure the syntax is correct.
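One way to sanity-check the configuration and ruleset syntax without a full restart (a sketch assuming the default Wazuh 3.x layout) is to run analysisd in test mode:

```shell
# parses ossec.conf and the ruleset, printing any configuration errors, then exits
/var/ossec/bin/ossec-analysisd -t
```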
Hello, this is my ossec.conf; I will check it and tell you how it went. But this file has not been touched, which is what I don't understand.
Hello, I think the problem is with the CVE OVAL feeds, but I don't know how to fix it:
2020/05/21 10:45:55 wazuh-modulesd:vulnerability-detector: ERROR: (5402): Could not load the CVE OVAL for BIONIC. XMLERR: Attribute '▒▒F~ܴ▒D' not followed by a " or '.
2020/05/21 10:45:55 wazuh-modulesd:vulnerability-detector: ERROR: (5426): CVE database could not be updated.
2020/05/21 10:45:55 wazuh-modulesd:vulnerability-detector: INFO: (5452): Starting vulnerability scanning.
2020/05/21 10:45:59 wazuh-modulesd:vulnerability-detector: INFO: (5453): Vulnerability scanning finished.
2020/05/21 10:46:00 wazuh-modulesd:vulnerability-detector: INFO: (5461): Starting Ubuntu Xenial database update...
2020/05/21 10:46:01 wazuh-modulesd:vulnerability-detector: ERROR: (5402): Could not load the CVE OVAL for XENIAL. XMLERR: Attribute $' has no value.
2020/05/21 10:46:01 wazuh-modulesd:vulnerability-detector: ERROR: (5426): CVE database could not be updated.
2020/05/21 10:46:02 wazuh-modulesd:vulnerability-detector: INFO: (5461): Starting Ubuntu Trusty database update...
2020/05/21 10:46:03 wazuh-modulesd:vulnerability-detector: ERROR: (5402): Could not load the CVE OVAL for TRUSTY. XMLERR: Attribute '9▒▒▒▒7{▒▒▒_}▒;▒▒6▒▒▒▒▒Y▒l' not followed by a " or '.
2020/05/21 10:46:03 wazuh-modulesd:vulnerability-detector: ERROR: (5426): CVE database could not be updated.
2020/05/21 10:46:04 wazuh-modulesd:vulnerability-detector: INFO: (5461): Starting Debian Stretch database update...
2020/05/21 10:46:05 wazuh-modulesd:syscollector: INFO: Evaluation finished.
2020/05/21 10:46:06 wazuh-modulesd:vulnerability-detector: INFO: (5461): Starting Debian Jessie database update...
2020/05/21 10:46:07 wazuh-modulesd:vulnerability-detector: INFO: (5461): Starting Debian Wheezy database update...
2020/05/21 10:46:08 wazuh-modulesd:vulnerability-detector: INFO: (5461): Starting Red Hat Enterprise Linux database update...
2020/05/21 10:46:13 sca: INFO: Evaluation finished for policy '/var/ossec/ruleset/sca/cis_rhel7_linux.yml'
2020/05/21 10:46:14 sca: INFO: Security Configuration Assessment scan finished. Duration: 20 seconds.
2020/05/21 10:46:36 wazuh-modulesd:vulnerability-detector: ERROR: (5493): The version of '(null)' could not be extracted.
Hi @tXambe !
I checked the configuration you shared with me in this message: https://github.com/wazuh/wazuh/issues/5065#issuecomment-631499409 I just noticed that GitHub formatted the configuration because it's XML, and that's why I couldn't see it correctly. Please try inserting everything in a code block so the format is maintained: https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks
Regarding the errors in your ossec.log, it looks like the vulnerability-detector configuration is not OK. Did you add any vulnerability-detector configuration that you can share with me? I can't see any configuration related to vulnerability-detector in the ossec.conf you shared with me.
Hello, I haven't modified anything inside the ossec.conf file. This is the vulnerability-detector content:
<wodle name="vulnerability-detector">
<disabled>no</disabled>
<interval>5m</interval>
<ignore_time>6h</ignore_time>
<run_on_start>yes</run_on_start>
<feed name="ubuntu-18">
<disabled>no</disabled>
<update_interval>1h</update_interval>
</feed>
<feed name="ubuntu-16">
<disabled>no</disabled>
<update_interval>1h</update_interval>
</feed>
<feed name="ubuntu-14">
<disabled>no</disabled>
<update_interval>1h</update_interval>
</feed>
<feed name="redhat">
<disabled>no</disabled>
<update_from_year>2010</update_from_year>
<update_interval>1h</update_interval>
</feed>
<feed name="debian-9">
<disabled>no</disabled>
<update_interval>1h</update_interval>
</feed>
<feed name="debian-8">
<disabled>no</disabled>
<update_interval>1h</update_interval>
</feed>
<feed name="debian-7">
<disabled>no</disabled>
<update_interval>1h</update_interval>
</feed>
</wodle>
I get these errors when executing dmesg -T:
[jue abr 16 19:29:07 2020] Out of memory: Kill process 2388 (ossec-analysisd) score 914 or sacrifice child
[jue abr 16 19:29:07 2020] Killed process 2388 (ossec-analysisd), UID 997, total-vm:23127300kB, anon-rss:14710332kB, file-rss:0kB, shmem-rss:0kB
[lun may 18 16:20:07 2020] wazuh-modulesd[7543]: segfault at 0 ip 00007f113449ec70 sp 00007f1130f2da58 error 4 in libc-2.17.so[7f113435e000+1c3000]
[lun may 18 16:38:43 2020] wazuh-modulesd[8424]: segfault at 0 ip 00007f92dfab0c70 sp 00007f92dcd40a58 error 4 in libc-2.17.so[7f92df970000+1c3000]
[mar may 19 08:08:51 2020] wazuh-modulesd[17739]: segfault at 0 ip 00007f49ff448c70 sp 00007f49ef7fda58 error 4 in libc-2.17.so[7f49ff308000+1c3000]
[mar may 19 10:47:26 2020] wazuh-modulesd[22890]: segfault at 0 ip 00007f966180dc70 sp 00007f965e4b2a58 error 4 in libc-2.17.so[7f96616cd000+1c3000]
[mar may 19 12:27:26 2020] wazuh-modulesd[25966]: segfault at 0 ip 00007f25e8e16c70 sp 00007f25e17f9a58 error 4 in libc-2.17.so[7f25e8cd6000+1c3000]
[mar may 19 12:31:23 2020] wazuh-modulesd[26771]: segfault at 0 ip 00007f979d7ddc70 sp 00007f979a482a58 error 4 in libc-2.17.so[7f979d69d000+1c3000]
[mar may 19 12:39:53 2020] wazuh-modulesd[27690]: segfault at 0 ip 00007faf099e6c70 sp 00007faf0668ba58 error 4 in libc-2.17.so[7faf098a6000+1c3000]
[mié may 20 11:05:55 2020] wazuh-modulesd[21856]: segfault at 0 ip 00007fa766871c70 sp 00007fa763300a58 error 4 in libc-2.17.so[7fa766731000+1c3000]
[mié may 20 12:11:46 2020] wazuh-modulesd[23829]: segfault at 0 ip 00007f33f4976c70 sp 00007f33f161ba58 error 4 in libc-2.17.so[7f33f4836000+1c3000]
[mié may 20 15:11:23 2020] wazuh-modulesd[27459]: segfault at 0 ip 00007f03e2ca2c70 sp 00007f03ceffca58 error 4 in libc-2.17.so[7f03e2b62000+1c3000]
[jue may 21 08:59:56 2020] wazuh-modulesd[5441]: segfault at 0 ip 00007fa76c812c70 sp 00007fa7692a1a58 error 4 in libc-2.17.so[7fa76c6d2000+1c3000]
[jue may 21 09:02:07 2020] wazuh-modulesd[5946]: segfault at 0 ip 00007fc0ed84fc70 sp 00007fc0e4ff8a58 error 4 in libc-2.17.so[7fc0ed70f000+1c3000]
[jue may 21 09:03:11 2020] wazuh-modulesd[6453]: segfault at 0 ip 00007f4f1c4fcc70 sp 00007f4f191a1a58 error 4 in libc-2.17.so[7f4f1c3bc000+1c3000]
[jue may 21 09:03:54 2020] wazuh-modulesd[6933]: segfault at 0 ip 00007f8889b79c70 sp 00007f888681ea58 error 4 in libc-2.17.so[7f8889a39000+1c3000]
[jue may 21 09:09:58 2020] wazuh-modulesd[7628]: segfault at 0 ip 00007fb806b18c70 sp 00007fb7feffca58 error 4 in libc-2.17.so[7fb8069d8000+1c3000]
[jue may 21 09:39:50 2020] wazuh-modulesd[10243]: segfault at 0 ip 00007ff43b011c70 sp 00007ff42b7fda58 error 4 in libc-2.17.so[7ff43aed1000+1c3000]
[jue may 21 10:19:27 2020] wazuh-modulesd[13925]: segfault at 0 ip 00007f4c0596dc70 sp 00007f4bfdffaa58 error 4 in libc-2.17.so[7f4c0582d000+1c3000]
[jue may 21 10:39:04 2020] wazuh-modulesd[15041]: segfault at 0 ip 00007fa209c0cc70 sp 00007fa2068b1a58 error 4 in libc-2.17.so[7fa209acc000+1c3000]
[jue may 21 10:41:14 2020] wazuh-modulesd[15594]: segfault at 0 ip 00007fc1f049ac70 sp 00007fc1ecf29a58 error 4 in libc-2.17.so[7fc1f035a000+1c3000]
[jue may 21 11:32:17 2020] wazuh-modulesd[19484]: segfault at 0 ip 00007f47dc591c70 sp 00007f47d9236a58 error 4 in libc-2.17.so[7f47dc451000+1c3000]
[jue may 21 12:00:59 2020] wazuh-modulesd[20785]: segfault at 0 ip 00007fc49fd0ac70 sp 00007fc49c9afa58 error 4 in libc-2.17.so[7fc49fbca000+1c3000]
/var/ossec/bin/ossec-control status
wazuh-clusterd not running...
wazuh-modulesd not running...
ossec-monitord is running...
ossec-logcollector is running...
ossec-remoted is running...
ossec-syscheckd is running...
ossec-analysisd is running...
ossec-maild not running...
ossec-execd is running...
wazuh-db is running...
ossec-authd is running...
ossec-agentlessd not running...
ossec-integratord not running...
ossec-dbd not running...
ossec-csyslogd not running...
Hello,
In this thread https://github.com/wazuh/wazuh-kibana-app/issues/2194#issuecomment-613261101
it is mentioned that the "problem may be related to a bug in vulnerability detector, the Redhat feed database", and upgrading to version 3.12.2 is recommended.
Hi @tXambe ,
Yes, in that old version of Wazuh there's a bug in the vulnerability detector; full details can be found here: https://github.com/wazuh/wazuh/issues/4884 That bug was fixed in #4885 and released in Wazuh 3.12.2. It's recommended to upgrade to that version or, if that's not possible, to try disabling the vulnerability-detector configuration.
Hello,
I will try to update to version 3.12 as you recommend. Would the following link be the right way?
Will I lose the indexes or the Kibana dashboard configuration?
https://documentation.wazuh.com/3.12/upgrade-guide/upgrading/latest_wazuh3_minor.html#upgrading-latest-minor
And a second option: can you tell me how to disable the vulnerability-detector configuration?
Thanks very much
Hi @tXambe,
The upgrade guide can be found here: https://documentation.wazuh.com/3.12/upgrade-guide/upgrading/latest_wazuh3_minor.html. Please follow the steps carefully to make sure the upgrade is successful. This guide will help you upgrade the Wazuh manager and Wazuh agents to v3.12. You will also have to upgrade the Elastic Stack (Elasticsearch, Filebeat, and Kibana) and the Wazuh app; the guide to upgrade ELK can be found here: https://documentation.wazuh.com/3.12/upgrade-guide/upgrading-elastic-stack/elastic_server_minor_upgrade.html If you follow these guides, no data stored in your Elasticsearch indices will be lost; this data is only lost if you manually delete it or uninstall Elasticsearch, which is not necessary.
Anyway, it's always recommended to make a copy of your config files (elasticsearch.yml, kibana.yml, ossec.conf...) just in case something goes wrong during the upgrade, and to make backups of your Elasticsearch indices; more info here: https://www.elastic.co/guide/en/elasticsearch/reference/current/snapshot-restore.html
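For the index backups, a minimal snapshot sketch against the Elasticsearch API (the repository name, filesystem path, and the -k/-u flags are placeholders to adapt to your setup; the location must be listed under path.repo in elasticsearch.yml):

```shell
# register a filesystem snapshot repository
curl -k -u admin:password -X PUT "https://localhost:9200/_snapshot/pre_upgrade" \
  -H 'Content-Type: application/json' \
  -d '{"type": "fs", "settings": {"location": "/mnt/es-backups"}}'

# take a snapshot of all indices and wait for it to finish
curl -k -u admin:password -X PUT \
  "https://localhost:9200/_snapshot/pre_upgrade/snapshot_1?wait_for_completion=true"
```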
To disable vulnerability-detector, just set the <disabled> option to yes and restart the services to apply the changes.
Best Regards, Pablo Torres
Hello,
Reading the ELK upgrade documentation, there are some initial steps that are executed with curl, but they use http and my configuration uses https. Will I have any problem executing curl with -k?
Thanks, regards
Hi @tXambe .
Yes, if you are using https you can run it with -k.
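For example, a sketch of the earlier cluster-status request over https (host and credentials are placeholders):

```shell
# -k skips certificate verification, useful with self-signed certificates
curl -k -u user:password https://WAZUH_SERVER_IP:55000/cluster/status
```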
To disable vulnerability-detector, you will need to change
<disabled>no</disabled>
with
<disabled>yes</disabled>
and then restart the services
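For reference, a sketch of how the top of that block in ossec.conf ends up (the rest of the wodle can stay as it is):

```xml
<wodle name="vulnerability-detector">
  <disabled>yes</disabled>
  <!-- interval, ignore_time, run_on_start and the feeds are unchanged -->
</wodle>
```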
Regards,
Hello,
I just finished the update of the wazuh-manager and the API, and I have a first problem: when starting the wazuh-manager I get this error:
-- Unit wazuh-manager.service has begun starting up.
may 22 13:54:48 spro env[32688]: 2020/05/22 13:54:48 ossec-analysisd: ERROR: Duplicate rule ID:81600
may 22 13:54:48 spro env[32688]: 2020/05/22 13:54:48 ossec-analysisd: CRITICAL: (1220): Error loading the rules: 'etc/rules/fortigate_rules.xml'.
may 22 13:54:48 spro env[32688]: ossec-analysisd: Configuration error. Exiting
may 22 13:54:48 spro systemd[1]: wazuh-manager.service: control process exited, code=exited status=1
may 22 13:54:48 spro polkitd[1218]: Unregistered Authentication Agent for unix-process:32682:588514647 (system bus name :1.3945, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale es_
may 22 13:54:48 spro systemd[1]: Failed to start Wazuh manager.
-- Subject: Unit wazuh-manager.service has failed
-- Defined-By: systemd
But inside the rules directory I only have one:
/var/ossec/etc/rules
[@spro rules]# ls -la
total 40
drwxrwx---. 2 root ossec 137 may 22 13:54 .
drwxrwx---. 7 ossec ossec 4096 may 22 13:11 ..
-rw-rw----. 1 ossec ossec 17490 dic 17 08:37 fortigate_rules.xml
-rw-rw----. 1 ossec ossec 1017 dic 16 13:26 local_rules.xml
-rw-rw----. 1 ossec ossec 1687 dic 17 08:37 pfsense_rules.xml
-rw-rw----. 1 ossec ossec 218 nov 12 2019 strongswan_rules.xml
-rw-rw----. 1 ossec ossec 3383 dic 17 08:39 watchguard_rules.xml
Hi @tXambe,
It looks like the rule with ID 81600 is duplicated. Let's check which files contain that rule ID; please run this command:
grep -r "\"81600\"" /var/ossec/ruleset/rules
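Since the startup error pointed at etc/rules/fortigate_rules.xml, the same search against the user ruleset directory may also be worth running (an extra check, not part of the original command):

```shell
grep -r "\"81600\"" /var/ossec/etc/rules
```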
Hello @pablotr9 ,
[@spro rules]# grep -r "\"81600\"" /var/ossec/ruleset/rules
/var/ossec/ruleset/rules/0390-fortigate_rules.xml: <rule id="81600" level="0">
I searched in /var/ossec/ruleset/rules and only 0390-fortigate_rules.xml appears.
Hi @tXambe,
Could you please check if the ruleset is duplicated in these two different paths?
ls -la /var/ossec/etc/rules/
and
ls -la /var/ossec/ruleset/rules/
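If the file exists in both places, a quick way to confirm they define the same rules (a suggested check; paths taken from the listings in this thread) is:

```shell
# compare the custom copy against the shipped ruleset file
diff /var/ossec/etc/rules/fortigate_rules.xml /var/ossec/ruleset/rules/0390-fortigate_rules.xml
```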
Hello,
ls -la /var/ossec/etc/rules/
total 40
drwxrwx---. 2 root ossec 137 may 22 13:54 .
drwxrwx---. 7 ossec ossec 4096 may 22 13:11 ..
-rw-rw----. 1 ossec ossec 17490 dic 17 08:37 fortigate_rules.xml
-rw-rw----. 1 ossec ossec 1017 dic 16 13:26 local_rules.xml
-rw-rw----. 1 ossec ossec 1687 dic 17 08:37 pfsense_rules.xml
-rw-rw----. 1 ossec ossec 218 nov 12 2019 strongswan_rules.xml
-rw-rw----. 1 ossec ossec 3383 dic 17 08:39 watchguard_rules.xml
```
ls -la /var/ossec/ruleset/rules/
total 1260
drwxr-x---. 2 root ossec 8192 may 22 12:18 .
drwxr-x---. 5 root ossec 61 may 22 12:18 ..
-rw-r-----. 1 root ossec 1552 abr 29 12:11 0010-rules_config.xml
-rw-r-----. 1 root ossec 13896 abr 29 12:11 0015-ossec_rules.xml
-rw-r-----. 1 root ossec 4731 abr 29 12:11 0016-wazuh_rules.xml
-rw-r-----. 1 root ossec 29953 abr 29 12:11 0020-syslog_rules.xml
-rw-r-----. 1 root ossec 5741 abr 29 12:11 0025-sendmail_rules.xml
-rw-r-----. 1 root ossec 8729 abr 29 12:11 0030-postfix_rules.xml
-rw-r-----. 1 root ossec 849 abr 29 12:11 0035-spamd_rules.xml
-rw-r-----. 1 root ossec 1795 abr 29 12:11 0040-imapd_rules.xml
-rw-r-----. 1 root ossec 1335 abr 29 12:11 0045-mailscanner_rules.xml
-rw-r-----. 1 root ossec 1535 abr 29 12:11 0050-ms-exchange_rules.xml
-rw-r-----. 1 root ossec 2360 abr 29 12:11 0055-courier_rules.xml
-rw-r-----. 1 root ossec 1188 abr 29 12:11 0060-firewall_rules.xml
-rw-r-----. 1 root ossec 9047 abr 29 12:11 0065-pix_rules.xml
-rw-r-----. 1 root ossec 4359 abr 29 12:11 0070-netscreenfw_rules.xml
-rw-r-----. 1 root ossec 2648 abr 29 12:11 0075-cisco-ios_rules.xml
-rw-r-----. 1 root ossec 2844 abr 29 12:11 0080-sonicwall_rules.xml
-rw-r-----. 1 root ossec 4516 abr 29 12:11 0085-pam_rules.xml
-rw-r-----. 1 root ossec 1747 abr 29 12:11 0090-telnetd_rules.xml
-rw-r-----. 1 root ossec 16570 abr 29 12:11 0095-sshd_rules.xml
-rw-r-----. 1 root ossec 2040 abr 29 12:11 0100-solaris_bsm_rules.xml
-rw-r-----. 1 root ossec 5237 abr 29 12:11 0105-asterisk_rules.xml
-rw-r-----. 1 root ossec 13536 abr 29 12:11 0110-ms_dhcp_rules.xml
-rw-r-----. 1 root ossec 2859 abr 29 12:11 0115-arpwatch_rules.xml
-rw-r-----. 1 root ossec 1221 abr 29 12:11 0120-symantec-av_rules.xml
-rw-r-----. 1 root ossec 1675 abr 29 12:11 0125-symantec-ws_rules.xml
-rw-r-----. 1 root ossec 1630 abr 29 12:11 0130-trend-osce_rules.xml
-rw-r-----. 1 root ossec 2493 abr 29 12:11 0135-hordeimp_rules.xml
-rw-r-----. 1 root ossec 1576 abr 29 12:11 0140-roundcube_rules.xml
-rw-r-----. 1 root ossec 2236 abr 29 12:11 0145-wordpress_rules.xml
-rw-r-----. 1 root ossec 1164 abr 29 12:11 0150-cimserver_rules.xml
-rw-r-----. 1 root ossec 3771 abr 29 12:11 0155-dovecot_rules.xml
-rw-r-----. 1 root ossec 1217 abr 29 12:11 0160-vmpop3d_rules.xml
-rw-r-----. 1 root ossec 3051 abr 29 12:11 0165-vpopmail_rules.xml
-rw-r-----. 1 root ossec 4036 abr 29 12:11 0170-ftpd_rules.xml
-rw-r-----. 1 root ossec 8018 abr 29 12:11 0175-proftpd_rules.xml
-rw-r-----. 1 root ossec 3411 abr 29 12:11 0180-pure-ftpd_rules.xml
-rw-r-----. 1 root ossec 2331 abr 29 12:11 0185-vsftpd_rules.xml
-rw-r-----. 1 root ossec 2556 abr 29 12:11 0190-ms_ftpd_rules.xml
-rw-r-----. 1 root ossec 12277 abr 29 12:11 0195-named_rules.xml
-rw-r-----. 1 root ossec 3166 abr 29 12:11 0200-smbd_rules.xml
-rw-r-----. 1 root ossec 2497 abr 29 12:11 0205-racoon_rules.xml
-rw-r-----. 1 root ossec 1897 abr 29 12:11 0210-vpn_concentrator_rules.xml
-rw-r-----. 1 root ossec 1043 abr 29 12:11 0215-policy_rules.xml
-rw-r-----. 1 root ossec 54994 abr 29 12:11 0220-msauth_rules.xml
-rw-r-----. 1 root ossec 6045 abr 29 12:11 0225-mcafee_av_rules.xml
-rw-r-----. 1 root ossec 5877 abr 29 12:11 0230-ms-se_rules.xml
-rw-r-----. 1 root ossec 5652 abr 29 12:11 0235-vmware_rules.xml
-rw-r-----. 1 root ossec 3446 abr 29 12:11 0240-ids_rules.xml
-rw-r-----. 1 root ossec 11399 abr 29 12:11 0245-web_rules.xml
-rw-r-----. 1 root ossec 13066 abr 29 12:11 0250-apache_rules.xml
-rw-r-----. 1 root ossec 2117 abr 29 12:11 0255-zeus_rules.xml
-rw-r-----. 1 root ossec 4537 abr 29 12:11 0260-nginx_rules.xml
-rw-r-----. 1 root ossec 3984 abr 29 12:11 0265-php_rules.xml
-rw-r-----. 1 root ossec 8065 abr 29 12:11 0270-web_appsec_rules.xml
-rw-r-----. 1 root ossec 8548 abr 29 12:11 0275-squid_rules.xml
-rw-r-----. 1 root ossec 5548 abr 29 12:11 0280-attack_rules.xml
-rw-r-----. 1 root ossec 1657 abr 29 12:11 0285-systemd_rules.xml
-rw-r-----. 1 root ossec 1040 abr 29 12:11 0290-firewalld_rules.xml
-rw-r-----. 1 root ossec 3160 abr 29 12:11 0295-mysql_rules.xml
-rw-r-----. 1 root ossec 3469 abr 29 12:11 0300-postgresql_rules.xml
-rw-r-----. 1 root ossec 4891 abr 29 12:11 0305-dropbear_rules.xml
-rw-r-----. 1 root ossec 10109 abr 29 12:11 0310-openbsd_rules.xml
-rw-r-----. 1 root ossec 1410 abr 29 12:11 0315-apparmor_rules.xml
-rw-r-----. 1 root ossec 3054 abr 29 12:11 0320-clam_av_rules.xml
-rw-r-----. 1 root ossec 2175 abr 29 12:11 0325-opensmtpd_rules.xml
-rw-r-----. 1 root ossec 11176 abr 29 12:11 0330-sysmon_rules.xml
-rw-r-----. 1 root ossec 1282 abr 29 12:11 0335-unbound_rules.xml
-rw-r-----. 1 root ossec 19134 abr 29 12:11 0340-puppet_rules.xml
-rw-r-----. 1 root ossec 21878 abr 29 12:11 0345-netscaler_rules.xml
-rw-r-----. 1 root ossec 17169 abr 29 12:11 0350-amazon_rules.xml
-rw-r-----. 1 root ossec 10563 abr 29 12:11 0360-serv-u_rules.xml
-rw-r-----. 1 root ossec 18339 abr 29 12:11 0365-auditd_rules.xml
-rw-r-----. 1 root ossec 933 abr 29 12:11 0375-usb_rules.xml
-rw-r-----. 1 root ossec 2162 abr 29 12:11 0380-redis_rules.xml
-rw-r-----. 1 root ossec 29932 abr 29 12:11 0385-oscap_rules.xml
-rw-r-----. 1 root ossec 16529 abr 29 12:11 0390-fortigate_rules.xml
-rw-r-----. 1 root ossec 4798 abr 29 12:11 0395-hp_rules.xml
-rw-r-----. 1 root ossec 2193 abr 29 12:11 0400-openvpn_rules.xml
-rw-r-----. 1 root ossec 1859 abr 29 12:11 0405-rsa-auth-manager_rules.xml
-rw-r-----. 1 root ossec 586 abr 29 12:11 0410-imperva_rules.xml
-rw-r-----. 1 root ossec 1377 abr 29 12:11 0415-sophos_rules.xml
-rw-r-----. 1 root ossec 1202 abr 29 12:11 0420-freeipa_rules.xml
-rw-r-----. 1 root ossec 2299 abr 29 12:11 0425-cisco-estreamer_rules.xml
-rw-r-----. 1 root ossec 3241 abr 29 12:11 0430-ms_wdefender_rules.xml
-rw-r-----. 1 root ossec 2506 abr 29 12:11 0435-ms_logs_rules.xml
-rw-r-----. 1 root ossec 4699 abr 29 12:11 0440-ms_sqlserver_rules.xml
-rw-r-----. 1 root ossec 1463 abr 29 12:11 0445-identity_guard_rules.xml
-rw-r-----. 1 root ossec 5002 abr 29 12:11 0450-mongodb_rules.xml
-rw-r-----. 1 root ossec 4230 abr 29 12:11 0455-docker_rules.xml
-rw-r-----. 1 root ossec 1959 abr 29 12:11 0460-jenkins_rules.xml
-rw-r-----. 1 root ossec 2373 abr 29 12:11 0470-vshell_rules.xml
-rw-r-----. 1 root ossec 1921 abr 29 12:11 0475-suricata_rules.xml
-rw-r-----. 1 root ossec 3990 abr 29 12:11 0480-qualysguard_rules.xml
-rw-r-----. 1 root ossec 5956 abr 29 12:11 0485-cylance_rules.xml
-rw-r-----. 1 root ossec 1988 abr 29 12:11 0490-virustotal_rules.xml
-rw-r-----. 1 root ossec 1478 abr 29 12:11 0495-proxmox-ve_rules.xml
-rw-r-----. 1 root ossec 6876 abr 29 12:11 0500-owncloud_rules.xml
-rw-r-----. 1 root ossec 5419 abr 29 12:11 0505-vuls_rules.xml
-rw-r-----. 1 root ossec 8882 abr 29 12:11 0510-ciscat_rules.xml
-rw-r-----. 1 root ossec 1814 abr 29 12:11 0515-exim_rules.xml
-rw-r-----. 1 root ossec 2649 abr 29 12:11 0520-vulnerability-detector_rules.xml
-rw-r-----. 1 root ossec 3793 abr 29 12:11 0525-openvas_rules.xml
-rw-r-----. 1 root ossec 7640 abr 29 12:11 0530-mysql_audit_rules.xml
-rw-r-----. 1 root ossec 934 abr 29 12:11 0535-mariadb_rules.xml
-rw-r-----. 1 root ossec 1185 abr 29 12:11 0540-pfsense_rules.xml
-rw-r-----. 1 root ossec 60378 abr 29 12:11 0545-osquery_rules.xml
-rw-r-----. 1 root ossec 442 abr 29 12:11 0550-kaspersky_rules.xml
-rw-r-----. 1 root ossec 7148 abr 29 12:11 0555-azure_rules.xml
-rw-r-----. 1 root ossec 19078 abr 29 12:11 0560-docker_integration_rules.xml
-rw-r-----. 1 root ossec 4162 abr 29 12:11 0565-ms_ipsec_rules.xml
-rw-r-----. 1 root ossec 4268 abr 29 12:11 0570-sca_rules.xml
-rw-r-----. 1 root ossec 4784 abr 29 12:11 0575-win-base_rules.xml
-rw-r-----. 1 root ossec 55018 abr 29 12:11 0580-win-security_rules.xml
-rw-r-----. 1 root ossec 117369 abr 29 12:11 0585-win-application_rules.xml
-rw-r-----. 1 root ossec 10877 abr 29 12:11 0590-win-system_rules.xml
-rw-r-----. 1 root ossec 12606 abr 29 12:11 0595-win-sysmon_rules.xml
-rw-r-----. 1 root ossec 5050 abr 29 12:11 0600-win-wdefender_rules.xml
-rw-r-----. 1 root ossec 14919 abr 29 12:11 0601-win-vipre_rules.xml
-rw-r-----. 1 root ossec 9322 abr 29 12:11 0602-win-wfirewall_rules.xml
-rw-r-----. 1 root ossec 7471 abr 29 12:11 0605-win-mcafee_rules.xml
-rw-r-----. 1 root ossec 5030 abr 29 12:11 0610-win-ms_logs_rules.xml
-rw-r-----. 1 root ossec 6804 abr 29 12:11 0615-win-ms-se_rules.xml
-rw-r-----. 1 root ossec 4357 abr 29 12:11 0620-win-generic_rules.xml
-rw-r-----. 1 root ossec 8140 abr 29 12:11 0625-cisco-asa_rules.xml
-rw-r-----. 1 root ossec 545 abr 29 12:11 0625-mcafee_epo_rules.xml
-rw-r-----. 1 root ossec 8961 abr 29 12:11 0630-nextcloud_rules.xml
-rw-r-----. 1 root ossec 782 abr 29 12:11 0635-owlh-zeek_rules.xml
-rw-r-----. 1 root ossec 848 abr 29 12:11 0640-junos_rules.xml
-rw-r-----. 1 root ossec 1969 abr 29 12:11 0675-panda-paps_rules.xml
-rw-r-----. 1 root ossec 4395 abr 29 12:11 0680-checkpoint-smart1_rules.xml
```
Hi @tXambe ,
You have some duplicated rule files. Why are these 4 files in `/var/ossec/etc/rules/`?
-rw-rw----. 1 ossec ossec 17490 dic 17 08:37 fortigate_rules.xml
-rw-rw----. 1 ossec ossec 1687 dic 17 08:37 pfsense_rules.xml
-rw-rw----. 1 ossec ossec 218 nov 12 2019 strongswan_rules.xml
-rw-rw----. 1 ossec ossec 3383 dic 17 08:39 watchguard_rules.xml
Did you modify them? If you didn't modify these files, you should delete them, as they are duplicates.
Hello, I didn't modify any of these files. Should I delete all the rule XML files, or the whole rules folder?
Hi,
Instead of deleting these files, move them to a new directory on your Desktop (for example), just in case you need to recover them later; see the sketch after this message. These are the duplicated files in `/var/ossec/etc/rules/`:
-rw-rw----. 1 ossec ossec 17490 dic 17 08:37 fortigate_rules.xml
-rw-rw----. 1 ossec ossec 1687 dic 17 08:37 pfsense_rules.xml
-rw-rw----. 1 ossec ossec 3383 dic 17 08:39 watchguard_rules.xml
Regarding these other 2 files in that directory, you don't need to remove or move them, as they are not duplicated (I'm assuming these files are correct):
-rw-rw----. 1 ossec ossec 1017 dic 16 13:26 local_rules.xml
-rw-rw----. 1 ossec ossec 218 nov 12 2019 strongswan_rules.xml
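For example, a minimal sketch of the move (the backup directory name is an assumption; adjust as needed):

```
# Hypothetical backup location; any directory outside /var/ossec/etc/rules works
mkdir -p ~/rules-backup
mv /var/ossec/etc/rules/fortigate_rules.xml \
   /var/ossec/etc/rules/pfsense_rules.xml \
   /var/ossec/etc/rules/watchguard_rules.xml ~/rules-backup/
# Restart the manager so it reloads the ruleset
systemctl restart wazuh-manager
```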
Hello, the service is up, but I have a new error inside ELK (I didn't update anything else) when I go to the Wazuh app:
Health Check. 3002 - Invalid 'wazuh-app-version' header. Expected version '3.12.x', and found '3.10.x'. (/api/check-stored-api)
cat /etc/ossec-init.conf
DIRECTORY="/var/ossec"
NAME="Wazuh"
VERSION="v3.12.3"
REVISION="31209"
DATE="Wed Apr 29 10:08:41 UTC 2020"
TYPE="server"
cat /usr/share/kibana/plugins/wazuh/package.json | grep version
"version": "3.10.2",
"version": "7.4.2"
Excuse me, but the admin of this server was a colleague who is no longer with the company, and he was the one who installed everything, so I can't give you more precise answers; I never installed or configured ELK, Wazuh, etc.
Thanks very much and best regards
Hi @tXambe,
Don't worry! That error means that the Wazuh app is not updated; you need to install the same Wazuh app version as your Wazuh manager (3.12.3 in your case). Did you upgrade Elasticsearch/Kibana? For the 3.12.3 version, these are the available packages:
| Wazuh app version | Kibana version | Package |
|---|---|---|
| 3.12.3 | 7.7.0 | https://packages.wazuh.com/wazuhapp/wazuhapp-3.12.3_7.7.0.zip |
| 3.12.3 | 7.6.2 | https://packages.wazuh.com/wazuhapp/wazuhapp-3.12.3_7.6.2.zip |
| 3.12.3 | 7.6.1 | https://packages.wazuh.com/wazuhapp/wazuhapp-3.12.3_7.6.1.zip |
| 3.12.3 | 6.8.8 | https://packages.wazuh.com/wazuhapp/wazuhapp-3.12.3_6.8.8.zip |
So you will need to update the ELK stack to one of these versions (I recommend the latest, 7.7.0 or 7.6.2); you can find a guide here: https://documentation.wazuh.com/3.12/upgrade-guide/upgrading-elastic-stack/elastic_server_minor_upgrade.html Once Kibana, Elasticsearch and Filebeat are updated, you will need to reinstall the Wazuh app as explained here: https://github.com/wazuh/wazuh-kibana-app/tree/master#upgrade (a sketch follows below).
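For reference, a sketch of the app reinstall, assuming you upgrade to Kibana 7.7.0 (pick the matching package from the table above):

```
# Remove the old Wazuh app and install the one matching the new Kibana version
sudo -u kibana /usr/share/kibana/bin/kibana-plugin remove wazuh
sudo -u kibana /usr/share/kibana/bin/kibana-plugin install https://packages.wazuh.com/wazuhapp/wazuhapp-3.12.3_7.7.0.zip
systemctl restart kibana
```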
Hello,
When I execute the first step as set out in the manual, curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
it doesn't do anything. It just shows:
>
Best regards
Hi, you have to copy the full command, not just the first line:
curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d' { "persistent": { "cluster.routing.allocation.enable": "primaries" } } '
Hello,
[@spro ~]# curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d' { "persistent": { "cluster.routing.allocation.enable": "primaries" } } '
curl: (7) Failed to connect to ::1: No existe ninguna ruta hasta el `host'
[@spro ~]# curl -k -X PUT "https://1.1.1.1:9200/_cluster/settings" -H 'Content-Type: application/json' -d' { "persistent": { "cluster.routing.allocation.enable": "primaries" } } '
curl: (7) Failed connect to 1.1.1.1:9200; Expiró el tiempo de conexión
Hi @tXambe ,
You have to specify the IP of the machine where Elasticsearch is installed
curl -k -X PUT "https://<ELASTICSEARCH_SERVER_IP>:9200/_cluster/settings" -H 'Content-Type: application/json' -d' { "persistent": { "cluster.routing.allocation.enable": "primaries" } } '
Hello, after updating Elasticsearch, Kibana, Filebeat and Wazuh, the Kibana login panel doesn't work ("Kibana server is not ready yet"), although the Kibana and Elasticsearch services are up:
sudo systemctl status -l kibana
● kibana.service - Kibana
Loaded: loaded (/etc/systemd/system/kibana.service; enabled; vendor preset: disabled)
Active: active (running) since lun 2020-05-25 16:50:02 CEST; 1min 28s ago
Main PID: 5512 (node)
Tasks: 11
Memory: 718.9M
CGroup: /system.slice/kibana.service
└─5512 /usr/share/kibana/bin/../node/bin/node /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana.yml
may 25 16:51:26 siemgfipro kibana[5512]: {"type":"log","@timestamp":"2020-05-25T14:51:26Z","tags":["info","plugins","metrics"],"pid":5512,"message":"Stopping plugin"}
may 25 16:51:26 siemgfipro kibana[5512]: {"type":"log","@timestamp":"2020-05-25T14:51:26Z","tags":["info","plugins","usageCollection"],"pid":5512,"message":"Stopping plugin"}
may 25 16:51:26 siemgfipro kibana[5512]: {"type":"log","@timestamp":"2020-05-25T14:51:26Z","tags":["info","plugins","visTypeVega"],"pid":5512,"message":"Stopping plugin"}
may 25 16:51:26 siemgfipro kibana[5512]: {"type":"log","@timestamp":"2020-05-25T14:51:26Z","tags":["info","plugins","code"],"pid":5512,"message":"Stopping plugin"}
may 25 16:51:26 siemgfipro kibana[5512]: {"type":"log","@timestamp":"2020-05-25T14:51:26Z","tags":["info","plugins","encryptedSavedObjects"],"pid":5512,"message":"Stopping plugin"}
may 25 16:51:26 siemgfipro kibana[5512]: {"type":"log","@timestamp":"2020-05-25T14:51:26Z","tags":["info","plugins","eventLog"],"pid":5512,"message":"Stopping plugin"}
may 25 16:51:26 siemgfipro kibana[5512]: {"type":"log","@timestamp":"2020-05-25T14:51:26Z","tags":["info","plugins","licensing"],"pid":5512,"message":"Stopping plugin"}
may 25 16:51:26 siemgfipro kibana[5512]: {"type":"log","@timestamp":"2020-05-25T14:51:26Z","tags":["info","plugins","siem"],"pid":5512,"message":"Stopping plugin"}
may 25 16:51:26 siemgfipro kibana[5512]: {"type":"log","@timestamp":"2020-05-25T14:51:26Z","tags":["info","plugins","taskManager"],"pid":5512,"message":"Stopping plugin"}
may 25 16:51:26 siemgfipro kibana[5512]: FATAL [circuit_breaking_exception] [parent] Data too large, data for [<http_request>] would be [1041167752/992.9mb], which is larger than the limit of [1020054732/972.7mb], real usage: [1041167752/992.9mb], new bytes reserved: [0/0b], usages [request=0/0b, fielddata=0/0b, in_flight_requests=0/0b, accounting=164517332/156.8mb], with { bytes_wanted=1041167752 & bytes_limit=1020054732 & durability="PERMANENT" } :: {"path":"/.kibana","query":{},"statusCode":429,"response":"{\"error\":{\"root_cause\":[{\"type\":\"circuit_breaking_exception\",\"reason\":\"[parent] Data too large, data for [<http_request>] would be [1041167752/992.9mb], which is larger than the limit of [1020054732/972.7mb], real usage: [1041167752/992.9mb], new bytes reserved: [0/0b], usages [request=0/0b, fielddata=0/0b, in_flight_requests=0/0b, accounting=164517332/156.8mb]\",\"bytes_wanted\":1041167752,\"bytes_limit\":1020054732,\"durability\":\"PERMANENT\"}],\"type\":\"circuit_breaking_exception\",\"reason\":\"[parent] Data too large, data for [<http_request>] would be [1041167752/992.9mb], which is larger than the limit of [1020054732/972.7mb], real usage: [1041167752/992.9mb], new bytes reserved: [0/0b], usages [request=0/0b, fielddata=0/0b, in_flight_requests=0/0b, accounting=164517332/156.8mb]\",\"bytes_wanted\":1041167752,\"bytes_limit\":1020054732,\"durability\":\"PERMANENT\"},\"status\":429}"}
sudo systemctl status -l elasticsearch
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/elasticsearch.service.d
└─elasticsearch.conf
Active: active (running) since lun 2020-05-25 16:40:34 CEST; 14min ago
Docs: https://www.elastic.co
Main PID: 4632 (java)
Tasks: 125
Memory: 11.5G
CGroup: /system.slice/elasticsearch.service
├─4632 /usr/share/elasticsearch/jdk/bin/java -Xshare:auto -Des.networkaddress.cache.ttl=60 -Des.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -XX:+ShowCodeDetailsInExceptionMessages -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dio.netty.allocator.numDirectArenas=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.locale.providers=SPI,COMPAT -Xms1g -Xmx1g -XX:+UseG1GC -XX:G1ReservePercent=25 -XX:InitiatingHeapOccupancyPercent=30 -Des.networkaddress.cache.ttl=60 -Des.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dio.netty.allocator.numDirectArenas=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.io.tmpdir=/tmp/elasticsearch-6582355286981897657 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/elasticsearch -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log -Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m -Djava.locale.providers=COMPAT -XX:MaxDirectMemorySize=536870912 -Des.path.home=/usr/share/elasticsearch -Des.path.conf=/etc/elasticsearch -Des.distribution.flavor=default -Des.distribution.type=rpm -Des.bundled_jdk=true -cp /usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch -p /var/run/elasticsearch/elasticsearch.pid --quiet
└─4836 /usr/share/elasticsearch/modules/x-pack-ml/platform/linux-x86_64/bin/controller
sudo journalctl -xe
may 25 18:58:35 spro kibana[16760]: {"type":"log","@timestamp":"2020-05-25T16:58:35Z","tags":["info","plugins","data"],"pid":16760,"message":"Stopping plugin"}
may 25 18:58:35 spro kibana[16760]: {"type":"log","@timestamp":"2020-05-25T16:58:35Z","tags":["info","plugins","visualizations"],"pid":16760,"message":"Stopping plugin"}
may 25 18:58:35 spro kibana[16760]: {"type":"log","@timestamp":"2020-05-25T16:58:35Z","tags":["info","plugins","expressions"],"pid":16760,"message":"Stopping plugin"}
may 25 18:58:35 spro kibana[16760]: {"type":"log","@timestamp":"2020-05-25T16:58:35Z","tags":["info","plugins","bfetch"],"pid":16760,"message":"Stopping plugin"}
may 25 18:58:35 spro kibana[16760]: {"type":"log","@timestamp":"2020-05-25T16:58:35Z","tags":["info","plugins","share"],"pid":16760,"message":"Stopping plugin"}
may 25 18:58:35 spro kibana[16760]: {"type":"log","@timestamp":"2020-05-25T16:58:35Z","tags":["info","plugins","rollup"],"pid":16760,"message":"Stopping plugin"}
may 25 18:58:35 spro kibana[16760]: {"type":"log","@timestamp":"2020-05-25T16:58:35Z","tags":["info","plugins","translations"],"pid":16760,"message":"Stopping plugin"}
may 25 18:58:35 spro kibana[16760]: {"type":"log","@timestamp":"2020-05-25T16:58:35Z","tags":["info","plugins","apm_oss"],"pid":16760,"message":"Stopping plugin"}
may 25 18:58:35 spro kibana[16760]: {"type":"log","@timestamp":"2020-05-25T16:58:35Z","tags":["info","plugins","kibanaLegacy"],"pid":16760,"message":"Stopping plugin"}
may 25 18:58:35 spro kibana[16760]: {"type":"log","@timestamp":"2020-05-25T16:58:35Z","tags":["info","plugins","features"],"pid":16760,"message":"Stopping plugin"}
may 25 18:58:35 spro kibana[16760]: {"type":"log","@timestamp":"2020-05-25T16:58:35Z","tags":["info","plugins","timelion"],"pid":16760,"message":"Stopping plugin"}
may 25 18:58:35 spro kibana[16760]: {"type":"log","@timestamp":"2020-05-25T16:58:35Z","tags":["info","plugins","telemetryCollectionXpack"],"pid":16760,"message":"Stopping plugin"}
may 25 18:58:35 spro kibana[16760]: {"type":"log","@timestamp":"2020-05-25T16:58:35Z","tags":["info","plugins","telemetry"],"pid":16760,"message":"Stopping plugin"}
may 25 18:58:35 spro kibana[16760]: {"type":"log","@timestamp":"2020-05-25T16:58:35Z","tags":["info","plugins","telemetryCollectionManager"],"pid":16760,"message":"Stopping plugin"}
may 25 18:58:35 spro kibana[16760]: {"type":"log","@timestamp":"2020-05-25T16:58:35Z","tags":["info","plugins","lens"],"pid":16760,"message":"Stopping plugin"}
may 25 18:58:35 spro kibana[16760]: {"type":"log","@timestamp":"2020-05-25T16:58:35Z","tags":["info","plugins","ossTelemetry"],"pid":16760,"message":"Stopping plugin"}
may 25 18:58:35 spro kibana[16760]: {"type":"log","@timestamp":"2020-05-25T16:58:35Z","tags":["info","plugins","metrics"],"pid":16760,"message":"Stopping plugin"}
may 25 18:58:35 spro kibana[16760]: {"type":"log","@timestamp":"2020-05-25T16:58:35Z","tags":["info","plugins","usageCollection"],"pid":16760,"message":"Stopping plugin"}
may 25 18:58:35 spro kibana[16760]: {"type":"log","@timestamp":"2020-05-25T16:58:35Z","tags":["info","plugins","visTypeVega"],"pid":16760,"message":"Stopping plugin"}
may 25 18:58:35 spro kibana[16760]: {"type":"log","@timestamp":"2020-05-25T16:58:35Z","tags":["info","plugins","code"],"pid":16760,"message":"Stopping plugin"}
may 25 18:58:35 spro kibana[16760]: {"type":"log","@timestamp":"2020-05-25T16:58:35Z","tags":["info","plugins","encryptedSavedObjects"],"pid":16760,"message":"Stopping plugin"}
may 25 18:58:35 spro kibana[16760]: {"type":"log","@timestamp":"2020-05-25T16:58:35Z","tags":["info","plugins","eventLog"],"pid":16760,"message":"Stopping plugin"}
may 25 18:58:35 spro kibana[16760]: {"type":"log","@timestamp":"2020-05-25T16:58:35Z","tags":["info","plugins","licensing"],"pid":16760,"message":"Stopping plugin"}
may 25 18:58:35 spro kibana[16760]: {"type":"log","@timestamp":"2020-05-25T16:58:35Z","tags":["info","plugins","siem"],"pid":16760,"message":"Stopping plugin"}
may 25 18:58:35 spro kibana[16760]: {"type":"log","@timestamp":"2020-05-25T16:58:35Z","tags":["info","plugins","taskManager"],"pid":16760,"message":"Stopping plugin"}
may 25 18:58:35 spro kibana[16760]: FATAL [circuit_breaking_exception] [parent] Data too large, data for [<http_request>] would be [1066441328/1017mb], which is larger than the limit of [1020054732/9
may 25 18:58:35 spro sudo[16794]: pam_unix(sudo:session): session closed for user root
may 25 18:58:42 spro systemd[1]: kibana.service: main process exited, code=exited, status=1/FAILURE
may 25 18:58:42 spro systemd[1]: Unit kibana.service entered failed state.
may 25 18:58:42 spro systemd[1]: kibana.service failed.
may 25 18:58:45 spro systemd[1]: kibana.service holdoff time over, scheduling restart.
may 25 18:58:45 spro systemd[1]: Stopped Kibana.
-- Subject: Unit kibana.service has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kibana.service has finished shutting down.
may 25 18:58:45 spro systemd[1]: Started Kibana.
-- Subject: Unit kibana.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kibana.service has finished starting up.
--
-- The start-up result is done.
may 25 18:59:00 spro filebeat[12780]: 2020-05-25T18:59:00.536+0200 ERROR [publisher_pipeline_output] pipeline/output.go:106 Failed to connect to backoff(elasticsearch(https
may 25 18:59:00 spro filebeat[12780]: 2020-05-25T18:59:00.536+0200 INFO [publisher_pipeline_output] pipeline/output.go:99 Attempting to reconnect to backoff(elasticsearch(h
may 25 18:59:00 spro filebeat[12780]: 2020-05-25T18:59:00.538+0200 INFO [publisher] pipeline/retry.go:196 retryer: send unwait-signal to consumer
may 25 18:59:00 spro filebeat[12780]: 2020-05-25T18:59:00.538+0200 INFO [publisher] pipeline/retry.go:198 done
may 25 18:59:00 spro filebeat[12780]: 2020-05-25T18:59:00.538+0200 INFO [publisher] pipeline/retry.go:173 retryer: send wait signal to consumer
may 25 18:59:00 spro filebeat[12780]: 2020-05-25T18:59:00.538+0200 INFO [publisher] pipeline/retry.go:175 done
may 25 18:59:03 spro filebeat[12780]: 2020-05-25T18:59:03.214+0200 INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"bea
may 25 18:59:07 spro kibana[16873]: {"type":"log","@timestamp":"2020-05-25T16:59:07Z","tags":["warning","plugins-discovery"],"pid":16873,"message":"Expect plugin \"id\" in camelCase, but found: apm_os
may 25 18:59:07 spro kibana[16873]: {"type":"log","@timestamp":"2020-05-25T16:59:07Z","tags":["warning","plugins-discovery"],"pid":16873,"message":"Expect plugin \"id\" in camelCase, but found: file_u
may 25 18:59:07 spro kibana[16873]: {"type":"log","@timestamp":"2020-05-25T16:59:07Z","tags":["warning","plugins-discovery"],"pid":16873,"message":"Expect plugin \"id\" in camelCase, but found: trigge
may 25 18:59:33 spro filebeat[12780]: 2020-05-25T18:59:33.212+0200 INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"bea
Hi @tXambe ,
Kibana logs:
FATAL [circuit_breaking_exception] [parent] Data too large, data for [<http_request>] would be [1066441328/1017mb], which is larger than the limit of [1020054732/9...
That error indicates that you need to reserve more heap memory for the Java virtual machine.
The heap size is defined in the jvm.options file located in /etc/elasticsearch/; you must increase the default 1 GB of heap to a higher value.
There are two rules to apply when setting the Elasticsearch heap size:
- Set the minimum (Xms) and maximum (Xmx) heap sizes to the same value.
- Set them to no more than 50% of your physical RAM (and below ~32 GB, the compressed oops threshold), so enough memory is left for the filesystem cache.
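As an illustration, a minimal sketch of the heap settings in /etc/elasticsearch/jvm.options (the 4g value is only an example, not a recommendation; size it for your host following the two rules above):

```
# /etc/elasticsearch/jvm.options (heap section)
# Xms and Xmx must match; 4g is an assumed example value
-Xms4g
-Xmx4g
```

After changing it, restart the service with systemctl restart elasticsearch.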
Regarding this error in Filebeat:
may 25 18:59:00 spro filebeat[12780]: 2020-05-25T18:59:00.536+0200 ERROR [publisher_pipeline_output] pipeline/output.go:106 Failed to connect to backoff(elasticsearch(https
It looks like Filebeat couldn't connect to Elasticsearch; please run this command to check if the error persists:
filebeat test output
Regards, Pablo Torres
Hello,
I increased the heap size and restarted the Elasticsearch and Kibana services, but I get the same error: Kibana server is not ready yet
filebeat test output
[spro elasticsearch]# filebeat test output
elasticsearch: https://1.1.1.1:9200...
parse url... OK
connection...
parse host... OK
dns lookup... OK
addresses: 1.1.1.1
dial up... OK
TLS...
security: server's certificate chain verification is enabled
handshake... ERROR x509: certificate signed by unknown authority
systemctl status kibana -l | grep -i -E "(error|warning)"
may 26 09:19:51 spro kibana[32122]: {"type":"log","@timestamp":"2020-05-26T07:19:51Z","tags":["warning","savedobjects-service"],"pid":32122,"message":"Unable to connect to Elasticsearch. Error: [search_phase_execution_exception] all shards failed"}
may 26 09:20:53 spro kibana[32122]: {"type":"log","@timestamp":"2020-05-26T07:20:53Z","tags":["warning","savedobjects-service"],"pid":32122,"message":"Unable to connect to Elasticsearch. Error: Request Timeout after 30000ms"}
may 26 09:20:56 spro kibana[32122]: {"type":"log","@timestamp":"2020-05-26T07:20:56Z","tags":["warning","savedobjects-service"],"pid":32122,"message":"Unable to connect to Elasticsearch. Error: [resource_already_exists_exception] index [.kibana_3/Fx8Qxiy4RsuMekBBmW3wtw] already exists, with { index_uuid=\"Fx8Qxiy4RsuMekBBmW3wtw\" & index=\".kibana_3\" }"}
may 26 09:20:56 spro kibana[32122]: {"type":"log","@timestamp":"2020-05-26T07:20:56Z","tags":["warning","savedobjects-service"],"pid":32122,"message":"Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_3 and restarting Kibana."}
curl -u adme -k "https://1.1.1.1:9200/_cat/indices?v" | grep kibana
Enter host password for user 'elastic':
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
69 23622 69 16301 0 0 489 green open .kibana_task_manager_2 -hnFrq8MRFepqEznTEJ1Tg 1 0 2 0 7.1kb 7.1kb
0 green open .kibana_task_manager_1 solGy8HERJuEudGz0DeuVA 1 0 2 0 12.8kb 12.8kb
100 23622 100 23622 0 0 709 0 0:00:33 0:00:33 --:--:-- 6627
green open .kibana_2 fU8gme9nS2KLdGNXCZkTEw 1 0 82 5 205kb 205kb
green open .kibana_1 OeD9U6lBR-ORoWzrdp4hlA 1 0 20 3 103kb 103kb
green open .kibana_3 Fx8Qxiy4RsuMekBBmW3wtw 1 0 0 0 208b 208b
curl -u kibana -k https://1.1.1.1:9200/?pretty
Enter host password for user 'kibana':
{
"error" : {
"root_cause" : [
{
"type" : "security_exception",
"reason" : "failed to authenticate user [kibana]",
"header" : {
"WWW-Authenticate" : [
"Bearer realm=\"security\"",
"ApiKey",
"Basic realm=\"security\" charset=\"UTF-8\""
]
}
}
],
"type" : "security_exception",
"reason" : "failed to authenticate user [kibana]",
"header" : {
"WWW-Authenticate" : [
"Bearer realm=\"security\"",
"ApiKey",
"Basic realm=\"security\" charset=\"UTF-8\""
]
}
},
"status" : 401
systemctl status -l kibana
● kibana.service - Kibana
Loaded: loaded (/etc/systemd/system/kibana.service; enabled; vendor preset: disabled)
Active: active (running) since mar 2020-05-26 10:37:22 CEST; 3min 44s ago
Main PID: 4094 (node)
Tasks: 11
Memory: 462.7M
CGroup: /system.slice/kibana.service
└─4094 /usr/share/kibana/bin/../node/bin/node /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana.yml
may 26 10:39:06 siemgfipro kibana[4094]: {"type":"log","@timestamp":"2020-05-26T08:39:06Z","tags":["warning","plugins","alerting","plugins","alerting"],"pid":4094,"message":"APIs are disabled due to the Encrypted Saved Objects plugin using an ephemeral encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in kibana.yml."}
may 26 10:39:07 siemgfipro kibana[4094]: {"type":"log","@timestamp":"2020-05-26T08:39:07Z","tags":["info","plugins","monitoring","monitoring"],"pid":4094,"message":"config sourced from: production cluster"}
may 26 10:39:07 siemgfipro kibana[4094]: {"type":"log","@timestamp":"2020-05-26T08:39:07Z","tags":["warning","plugins","monitoring","monitoring"],"pid":4094,"message":"X-Pack Monitoring Cluster Alerts will not be available: undefined"}
may 26 10:39:07 siemgfipro kibana[4094]: {"type":"log","@timestamp":"2020-05-26T08:39:07Z","tags":["info","savedobjects-service"],"pid":4094,"message":"Waiting until all Elasticsearch nodes are compatible with Kibana before starting saved objects migrations..."}
may 26 10:39:07 siemgfipro kibana[4094]: {"type":"log","@timestamp":"2020-05-26T08:39:07Z","tags":["info","plugins","watcher"],"pid":4094,"message":"Your basic license does not support watcher. Please upgrade your license."}
may 26 10:39:07 siemgfipro kibana[4094]: {"type":"log","@timestamp":"2020-05-26T08:39:07Z","tags":["info","plugins","monitoring","monitoring","kibana-monitoring"],"pid":4094,"message":"Starting monitoring stats collection"}
may 26 10:39:07 siemgfipro kibana[4094]: {"type":"log","@timestamp":"2020-05-26T08:39:07Z","tags":["info","savedobjects-service"],"pid":4094,"message":"Starting saved objects migrations"}
may 26 10:39:08 siemgfipro kibana[4094]: {"type":"log","@timestamp":"2020-05-26T08:39:08Z","tags":["info","savedobjects-service"],"pid":4094,"message":"Creating index .kibana_3."}
may 26 10:39:08 siemgfipro kibana[4094]: {"type":"log","@timestamp":"2020-05-26T08:39:08Z","tags":["warning","savedobjects-service"],"pid":4094,"message":"Unable to connect to Elasticsearch. Error: [resource_already_exists_exception] index [.kibana_3/Fx8Qxiy4RsuMekBBmW3wtw] already exists, with { index_uuid=\"Fx8Qxiy4RsuMekBBmW3wtw\" & index=\".kibana_3\" }"}
may 26 10:39:08 siemgfipro kibana[4094]: {"type":"log","@timestamp":"2020-05-26T08:39:08Z","tags":["warning","savedobjects-service"],"pid":4094,"message":"Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_3 and restarting Kibana."}
Hi @tXambe,
handshake... ERROR x509: certificate signed by unknown authority
Did you configure the output.elasticsearch.ssl.certificate_authorities setting in your filebeat.yml? More info can be found here: https://www.elastic.co/guide/en/beats/filebeat/current/configuration-ssl.html
As a temporary solution you could set output.elasticsearch.ssl.verification_mode: none, but this is NOT recommended on production servers, as it disables many of the benefits of using SSL.
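For reference, a minimal filebeat.yml sketch of the two alternatives (the CA path is an assumption; point it to wherever your certificates actually live):

```
# Preferred: trust the CA that signed the Elasticsearch certificate
output.elasticsearch.ssl.certificate_authorities: ["/etc/filebeat/certs/ca/ca.crt"]
# Temporary workaround only, NOT for production:
#output.elasticsearch.ssl.verification_mode: none
```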
Kibana
may 26 09:20:56 spro kibana[32122]: {"type":"log","@timestamp":"2020-05-26T07:20:56Z","tags":["warning","savedobjects-service"],"pid":32122,"message":"Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_3 and restarting Kibana."}
You need to remove the `.kibana_3` index:
curl -u USERNAME -X DELETE -k "https://1.1.1.1:9200/.kibana_3"
"type" : "security_exception",
"reason" : "failed to authenticate user [kibana]",
That means that the credentials of the `kibana` user are wrong, so the request against Elasticsearch can't be authenticated.
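If the kibana user's password was changed, the credentials Kibana sends to Elasticsearch are set in /etc/kibana/kibana.yml; a sketch with placeholder values:

```
# /etc/kibana/kibana.yml (placeholder values, not your real credentials)
elasticsearch.username: "kibana"
elasticsearch.password: "<KIBANA_USER_PASSWORD>"
```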
Hello,
Kibana is live and the web interface is up, but the dashboard is empty. I don't know if we have to wait a long time until the alerts appear in the dashboard.
Hi @tXambe,
Great! Filebeat is in charge of reading alerts and shipping them into Elasticsearch; please check its status:
systemctl status filebeat
and
filebeat test output
Hello,
I don't understand this at all; right now I can see the alerts from 14 days ago, but not those from yesterday or today.
systemctl status filebeat
● filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch.
Loaded: loaded (/usr/lib/systemd/system/filebeat.service; enabled; vendor preset: disabled)
Active: active (running) since mar 2020-05-26 09:12:44 CEST; 3h 31min ago
Docs: https://www.elastic.co/products/beats/filebeat
Main PID: 32095 (filebeat)
Tasks: 13
Memory: 36.8M
CGroup: /system.slice/filebeat.service
└─32095 /usr/share/filebeat/bin/filebeat -environment systemd -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /v...
may 26 12:42:24 spro filebeat[32095]: 2020-05-26T12:42:24.683+0200 INFO [publisher] pipeline/retry.go:175 done
may 26 12:42:46 spro filebeat[32095]: 2020-05-26T12:42:46.519+0200 INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"...
may 26 12:43:11 spro filebeat[32095]: 2020-05-26T12:43:11.541+0200 ERROR [publisher_pipeline_output] pipeline/output.go:106 Failed to connect to backoff(el...nown authority
may 26 12:43:11 spro filebeat[32095]: 2020-05-26T12:43:11.541+0200 INFO [publisher_pipeline_output] pipeline/output.go:99 Attempting to reconnect to backof...ect attempt(s)
may 26 12:43:11 spro filebeat[32095]: 2020-05-26T12:43:11.543+0200 INFO [publisher] pipeline/retry.go:196 retryer: send unwait-signal to consumer
may 26 12:43:11 spro filebeat[32095]: 2020-05-26T12:43:11.543+0200 INFO [publisher] pipeline/retry.go:198 done
may 26 12:43:11 spro filebeat[32095]: 2020-05-26T12:43:11.543+0200 INFO [publisher] pipeline/retry.go:173 retryer: send wait signal to consumer
may 26 12:43:11 spro filebeat[32095]: 2020-05-26T12:43:11.543+0200 INFO [publisher] pipeline/retry.go:175 done
may 26 12:43:16 spro filebeat[32095]: 2020-05-26T12:43:16.520+0200 INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"...
may 26 12:43:46 spro filebeat[32095]: 2020-05-26T12:43:46.519+0200 INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"...
Hint: Some lines were ellipsized, use -l to show in full.
[@spro bin]# filebeat test output
elasticsearch: https://1.1.1.1:9200...
parse url... OK
connection...
parse host... OK
dns lookup... OK
addresses: 1.1.1.1
dial up... OK
TLS...
security: server's certificate chain verification is enabled
handshake... ERROR x509: certificate signed by unknown authority
systemctl status -l filebeat
● filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch.
Loaded: loaded (/usr/lib/systemd/system/filebeat.service; enabled; vendor preset: disabled)
Active: active (running) since mar 2020-05-26 13:00:18 CEST; 16s ago
Docs: https://www.elastic.co/products/beats/filebeat
Main PID: 12037 (filebeat)
Tasks: 13
Memory: 29.2M
CGroup: /system.slice/filebeat.service
└─12037 /usr/share/filebeat/bin/filebeat -environment systemd -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebeat
may 26 13:00:25 spro filebeat[12037]: 2020-05-26T13:00:25.580+0200 ERROR [publisher_pipeline_output] pipeline/output.go:106 Failed to connect to backoff(elasticsearch(https://1.1.1.1:9200)): Get https://1.1.1.1:9200: x509: certificate signed by unknown authority
may 26 13:00:25 spro filebeat[12037]: 2020-05-26T13:00:25.580+0200 INFO [publisher_pipeline_output] pipeline/output.go:99 Attempting to reconnect to backoff(elasticsearch(https://1.1.1.1:9200)) with 2 reconnect attempt(s)
may 26 13:00:25 spro filebeat[12037]: 2020-05-26T13:00:25.581+0200 INFO [publisher] pipeline/retry.go:173 retryer: send wait signal to consumer
may 26 13:00:25 spro filebeat[12037]: 2020-05-26T13:00:25.581+0200 INFO [publisher] pipeline/retry.go:175 done
may 26 13:00:31 spro filebeat[12037]: 2020-05-26T13:00:31.505+0200 ERROR [publisher_pipeline_output] pipeline/output.go:106 Failed to connect to backoff(elasticsearch(https://1.1.1.1:9200)): Get https://1.1.1.1:9200: x509: certificate signed by unknown authority
may 26 13:00:31 spro filebeat[12037]: 2020-05-26T13:00:31.506+0200 INFO [publisher_pipeline_output] pipeline/output.go:99 Attempting to reconnect to backoff(elasticsearch(https://1.1.1.1:9200)) with 3 reconnect attempt(s)
may 26 13:00:31 spro filebeat[12037]: 2020-05-26T13:00:31.506+0200 INFO [publisher] pipeline/retry.go:196 retryer: send unwait-signal to consumer
may 26 13:00:31 spro filebeat[12037]: 2020-05-26T13:00:31.506+0200 INFO [publisher] pipeline/retry.go:198 done
may 26 13:00:31 spro filebeat[12037]: 2020-05-26T13:00:31.506+0200 INFO [publisher] pipeline/retry.go:173 retryer: send wait signal to consumer
may 26 13:00:31 spro filebeat[12037]: 2020-05-26T13:00:31.506+0200 INFO [publisher] pipeline/retry.go:175 done
Hi @tXambe,
Let me briefly explain the data flow: Wazuh generates alerts depending on its configuration, e.g. when a file has been modified (you can see a full list of Wazuh capabilities here: https://documentation.wazuh.com/3.12/user-manual/reference/ossec-conf/)
When an alert is generated, it is stored in this file: `/var/ossec/logs/alerts/alerts.json`.
Filebeat monitors the log files or locations that you specify; following our documentation, you configure it to read alerts from the `alerts.json` file and forward them to Elasticsearch.
So every alert generated by Wazuh is read by Filebeat and forwarded into Elasticsearch:
Wazuh -> Filebeat -> Elasticsearch
So the reason you can see alerts from 14 days ago but not recent alerts is that Filebeat can't communicate with Elasticsearch. Take a look at this error in the Filebeat logs:
may 26 13:00:31 spro filebeat[12037]: 2020-05-26T13:00:31.505+0200 ERROR [publisher_pipeline_output] pipeline/output.go:106 Failed to connect to backoff(elasticsearch(https://1.1.1.1:9200)): Get https://1.1.1.1:9200: x509: certificate signed by unknown authority
You need to check the Filebeat configuration, located in `/etc/filebeat/filebeat.yml`, and, as the error mentions, specify the output.elasticsearch.ssl.certificate_authorities setting; more info can be found here: https://www.elastic.co/guide/en/beats/filebeat/current/configuration-ssl.html
As I mentioned in my previous reply:
As a temporary solution, you could set output.elasticsearch.ssl.verification_mode: none, but this is NOT recommended on production servers, as it disables many of the benefits of using SSL.
Hello, I have configured filebeat.yml as you told me, but I get the same error; I also set output.elasticsearch.ssl.verification_mode: none inside filebeat.yml, and the error is the same.
cat /etc/filebeat/filebeat.yml
# Wazuh - Filebeat configuration file
filebeat.modules:
  - module: wazuh
    alerts:
      enabled: true
    archives:
      enabled: false
setup.template.json.enabled: true
setup.template.json.path: '/etc/filebeat/wazuh-template.json'
setup.template.json.name: 'wazuh'
setup.template.overwrite: true
setup.ilm.enabled: false
elasticsearch.ssl.certificateAuthorities: ["/etc/kibana/certs/ca/ca.crt"]
output.elasticsearch.hosts: ['https://1.1.1.1:9200']
But the x509 error still appears:
systemctl status -l filebeat
● filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch.
Loaded: loaded (/usr/lib/systemd/system/filebeat.service; enabled; vendor preset: disabled)
Active: active (running) since mié 2020-05-27 10:30:27 CEST; 53s ago
Docs: https://www.elastic.co/products/beats/filebeat
Main PID: 21058 (filebeat)
Tasks: 14
Memory: 55.4M
CGroup: /system.slice/filebeat.service
└─21058 /usr/share/filebeat/bin/filebeat -environment systemd -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebeat
may 27 10:30:46 spro filebeat[21058]: 2020-05-27T10:30:46.937+0200 INFO [publisher] pipeline/retry.go:198 done
may 27 10:30:46 spro filebeat[21058]: 2020-05-27T10:30:46.937+0200 INFO [publisher] pipeline/retry.go:173 retryer: send wait signal to consumer
may 27 10:30:46 spro filebeat[21058]: 2020-05-27T10:30:46.937+0200 INFO [publisher] pipeline/retry.go:175 done
may 27 10:30:58 spro filebeat[21058]: 2020-05-27T10:30:58.130+0200 INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":540,"time":{"ms":547}},"total":{"ticks":1690,"time":{"ms":1697},"value":1690},"user":{"ticks":1150,"time":{"ms":1150}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":8},"info":{"ephemeral_id":"6ea5de41-83a6-41ba-888f-7c5268642997","uptime":{"ms":30449}},"memstats":{"gc_next":31840528,"memory_alloc":20825368,"memory_total":52479192,"rss":49958912},"runtime":{"goroutines":25}},"filebeat":{"events":{"active":4117,"added":4120,"done":3},"harvester":{"files":{"1e5011d0-ffa4-493d-88e0-859cd7c17008":{"last_event_published_time":"2020-05-27T10:30:29.076Z","last_event_timestamp":"2020-05-27T10:30:29.076Z","name":"/var/ossec/logs/alerts/alerts.json","read_offset":3595290,"size":6738879,"start_time":"2020-05-27T10:30:28.147Z"}},"open_files":1,"running":1,"started":1}},"libbeat":{"config":{"module":{"running":0}},"output":{"type":"elasticsearch"},"pipeline":{"clients":1,"events":{"active":4117,"filtered":3,"published":4116,"retry":100,"total":4120}}},"registrar":{"states":{"cleanup":1,"current":1,"update":3},"writes":{"success":3,"total":3}},"system":{"cpu":{"cores":4},"load":{"1":5.55,"15":4.27,"5":4.4,"norm":{"1":1.3875,"15":1.0675,"5":1.1}}}}}}
may 27 10:31:06 spro filebeat[21058]: 2020-05-27T10:31:06.582+0200 ERROR [publisher_pipeline_output] pipeline/output.go:106 Failed to connect to backoff(elasticsearch(https://1.1.1.1:9200)): Get https://1.1.1.1:9200: x509: certificate signed by unknown authority
may 27 10:31:06 spro filebeat[21058]: 2020-05-27T10:31:06.583+0200 INFO [publisher_pipeline_output] pipeline/output.go:99 Attempting to reconnect to backoff(elasticsearch(https://1.1.1.1:9200)) with 5 reconnect attempt(s)
may 27 10:31:06 spro filebeat[21058]: 2020-05-27T10:31:06.583+0200 INFO [publisher] pipeline/retry.go:196 retryer: send unwait-signal to consumer
may 27 10:31:06 spro filebeat[21058]: 2020-05-27T10:31:06.583+0200 INFO [publisher] pipeline/retry.go:198 done
may 27 10:31:06 spro filebeat[21058]: 2020-05-27T10:31:06.583+0200 INFO [publisher] pipeline/retry.go:173 retryer: send wait signal to consumer
may 27 10:31:06 spro filebeat[21058]: 2020-05-27T10:31:06.583+0200 INFO [publisher] pipeline/retry.go:175 done
openssl s_client -connect 1.1.1.1:9200 -showcerts
CONNECTED(00000003)
depth=0 CN = elasticsearch
verify error:num=20:unable to get local issuer certificate
verify return:1
depth=0 CN = elasticsearch
verify error:num=21:unable to verify the first certificate
verify return:1
---
Certificate chain
0 s:/CN=elasticsearch
i:/CN=Elastic Certificate Tool Autogenerated CA
-----BEGIN CERTIFICATE-----
MIIDOTCCAiGgAwIBAgIVAOB55Uhue5pSvt6Go+KacyXgLG8xMA0GCSqGSIb3DQEB
CwUAMDQxMjAwBgNVBAMTKUVsYXN0aWMgQ2VydGlmaWNhdGUgVG9vbCBBdXRvZ2Vu
ZXJhdGVkIENBMB4XDTIwMDEyMTA4NDUzNVoXDTIzMDEyMDA4NDUzNVowGDEWMBQG
A1UEAxMNZWxhc3RpY3NlYXJjaDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoC
ggEBAIJ2zSy/6Dg3i1X0j8F+5n31WSUOskvXxoNupzjKxiYXOL6ax7yU83eKY0nV
SU7KF2TT+O035hu1HLHvQ4qgYFk7tJtIhH3hgfxv+UUpGzLL917K33HiTW19IGup
lOzW0HH+CA1teef0HKDx5FAZ8cYRIu1OYyMqbiEr9Ub8PDEgPJ4Y3LMwm8U4myxn
iZinP/2zrTpAgnWy1p1JEyTSEOxkVJbHmQEej13XnOQKqyEsEniisR90Q+Wi5mcz
FodW1PLe+zJGsr6XLmbXZ5LeMC9jcT5hJwZUdDnKuA9USXuhB8E8/Y9h6OoL51gO
tA9WHheeWK+oXQxPJNJSgv3snS8CAwEAAaNeMFwwHQYDVR0OBBYEFN04DnjWCvPd
WZZI5HXw+yCnJvV/RLRcG0fL4hLuu/i4K986SlBcvPaMAsRVxb6TEcUg6mz4SAE8
QJ1HQ1o7zId96NHKIhDwTlSknrVjJ3kaRRVt9tyIe0ki94hdqoyMv7DRCJHIq/SH
cAydbQZTcJVNqWRt2S5FOr+D740XA4FT4qDMiCxerADoMCCIbMZumdn79mPup5Qz
1hk0eYERKW+u17Hd6Dl3fF3obodanmNei0HeFoTmljlnLBh1qYrdc6MmcF2sPTpu
w+wiYFl9nVPU4YDIBA==
-----END CERTIFICATE-----
---
Server certificate
subject=/CN=elasticsearch
issuer=/CN=Elastic Certificate Tool Autogenerated CA
---
No client certificate CA names sent
Peer signing digest: SHA512
Server Temp Key: ECDH, P-256, 256 bits
Hi @tXambe.
Sorry for the late reply. Was the Elasticsearch/Kibana/Filebeat SSL configured using our documentation? (https://documentation.wazuh.com/3.12/installation-guide/installing-elastic-stack/protect-installation/xpack.html)
In that case, it looks like some configuration is missing in your `filebeat.yml`:
output.elasticsearch.protocol: https
output.elasticsearch.ssl.certificate: "/etc/filebeat/certs/wazuh-manager.crt"
output.elasticsearch.ssl.key: "/etc/filebeat/certs/wazuh-manager.key"
output.elasticsearch.ssl.certificate_authorities: ["/etc/filebeat/certs/ca/ca.crt"]
Every time you make any change in `filebeat.yml` you will have to restart the service (`systemctl restart filebeat`) and then check its connection with Elasticsearch: `filebeat test output`
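That is:

```
systemctl restart filebeat
filebeat test output
```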
Hello,
I included these lines in filebeat.yml, but the service did not start; I found that it was because I had run out of disk space due to the number of indices. The storage has been expanded and the service is up again.
The Kibana panel is up, but the alerts show "No results found". Also, for example, the agent on the AD domain controller was updated, but it appears as disconnected and shows the old agent version. Is it necessary to restart the server after updating the agent?
On the domain controller this error appears:
2020/05/28 12:04:54 ossec-agent: ERROR: (1216): Unable to connect to '1.1.1.1': 'A connection cannot be established as the destination computer expressly denied that connection.'.
2020/05/28 12:05:04 ossec-agent: INFO: Trying to connect to server (1.1.1.1:1514/tcp).
[@spro etc]# netstat -an |grep 1514
udp 0 0 0.0.0.0:1514 0.0.0.0:*
[@spro ossec]# /var/ossec/bin/ossec-remoted -f
[@spro ossec]# 2020/05/28 13:46:24 ossec-remoted: CRITICAL: (1206): Unable to Bind port '1514' due to [(98)-(Address already in use)]
tail -n 300 /var/ossec/logs/ossec.log | grep -i error
2020/05/28 13:21:44 ossec-authd: ERROR: ERROR 9007: Duplicated IP.
2020/05/28 13:21:44 manage_agents: ERROR: ERROR 9007: Duplicated IP
How can I check the configuration / index rotation policy so that the disk space problem does not occur again?
Thanks very much and best regards
Hi @tXambe,
Yes, please restart the components after the update is complete. Those logs show some connection errors; make sure that the agent can connect to the Wazuh manager, and check whether the port is open or the firewall is interfering. Note that your netstat output shows port 1514 listening on UDP while the agent is trying 1514/tcp, so the connection protocol configured on the manager and the agent may not match.
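A quick sketch of those checks (nc is an assumption; any TCP connectivity test works):

```
# On the manager: confirm which protocol port 1514 is bound to (tcp vs udp)
netstat -tulpn | grep 1514
# From the agent host: test TCP reachability to the manager
nc -zv <MANAGER_IP> 1514
```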
¿ How can I check the configuration / index rotation policy so that the disk space problem does not occur again?
I recommend using ILM policies (https://www.elastic.co/guide/en/elasticsearch/reference/current/index-lifecycle-management.html). You can create a specific policy that deletes indices when they are older than 30 days (for example); see the sketch below. You can see a full example in our blog: https://wazuh.com/blog/wazuh-index-management/
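As an illustration, a hedged sketch of such a policy using the ILM API (the policy name wazuh-cleanup and the 30-day age are assumptions; the blog post above covers attaching the policy to the Wazuh indices):

```
curl -k -u USERNAME -X PUT "https://<ELASTICSEARCH_SERVER_IP>:9200/_ilm/policy/wazuh-cleanup" \
  -H 'Content-Type: application/json' -d'
{
  "policy": {
    "phases": {
      "delete": {
        "min_age": "30d",
        "actions": { "delete": {} }
      }
    }
  }
}'
```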
Let me also mention that we recommend using one issue per specific bug/question, as it is easier for other users to find an answer when they have the same problem. If possible, please open a new issue for each remaining question and we will be happy to help you! You can also find us in our Slack community channel: https://wazuh.com/community/join-us-on-slack/
Best Regards, Pablo Torres
Hello, the port is open, there have been no changes to the firewall rules, and the manager is reachable both by IP and by DNS name.
Thanks and best regards
Hello,
I have errors with these health check modules:
Check Wazuh API connection
Check for Wazuh API version
Check Elasticsearch index pattern: Ready
Check Elasticsearch template: Ready
Check index pattern known fields: Ready
Health Check. 3002 - https://localhost:55000 is unreachable (/api/check-stored-api)
In the "Wazu API Configuration" when check connection I have this message:
Wazuh App: Please set up Wazuh API credentials.
When I check the connection, I get this error:
Settings. 3004 - Wrong credentials (/api/check-api)
I don't know where I can change the password in this control panel. Can anyone help me?
Thanks and a greeting