wazuh / wazuh-qa

Wazuh - Quality Assurance
GNU General Public License v2.0

Functional testing of AIX Manager v4.3.10 #4053

Closed: Rebits closed this issue 6 months ago

Rebits commented 1 year ago
Wazuh automation issue: https://github.com/wazuh/wazuh-automation/issues/1102

We have been asked to perform the proposed functional tests for the log data collection and file integrity monitoring modules on a 4.3.10 AIX manager.

For each of the proposed architectures and test cases, the following must be done:

Architectures

Test cases

Logcollector

- Different `location` values:
  - Common path: `/home/vagrant/test.log`
  - Wildcard group: `/home/vagrant/file.log-%Y-%m-%d`
  - Wildcard generic: `/home/vagrant/*.log`
- Command monitoring:
  - Command with arguments: `ls -la /tmp`
  - Command without arguments: `df`
  - Command monitoring using the `` option: **Command: `df`** / **Alias: `File system disk space`**
- Different `frequency` values: `5`, `10`, `60`
- Collect events generated since wazuh-logcollector was stopped, disabling the `` option
- Different `` values:
  - `syslog`
  - `json`
- Use the `
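As a sketch, the location and command cases above would map to `localfile` blocks along these lines in `ossec.conf` (standard Wazuh syntax; the 60-second frequency shown is one of the tested values):

```xml
<ossec_config>
  <!-- Plain path, date-wildcard and generic-wildcard locations -->
  <localfile>
    <log_format>syslog</log_format>
    <location>/home/vagrant/test.log</location>
  </localfile>
  <localfile>
    <log_format>syslog</log_format>
    <location>/home/vagrant/file.log-%Y-%m-%d</location>
  </localfile>
  <localfile>
    <log_format>syslog</log_format>
    <location>/home/vagrant/*.log</location>
  </localfile>

  <!-- Command monitoring with an alias and a custom frequency -->
  <localfile>
    <log_format>command</log_format>
    <command>df</command>
    <alias>File system disk space</alias>
    <frequency>60</frequency>
  </localfile>
</ossec_config>
```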
Syscheck

- Basic directory monitoring
- Report changes
- Check all
- Ignore
- Nodiff
- Frequency
- Scan time
- Scan day
- Alert new files
- Scan on start
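These cases can be sketched as a `syscheck` block like the following (documented Wazuh FIM options; the monitored directory and the concrete values are illustrative assumptions, and in practice `frequency` and `scan_time`/`scan_day` would be tested separately):

```xml
<syscheck>
  <!-- Monitored directory with all checks and content-diff reporting enabled -->
  <directories check_all="yes" report_changes="yes">/home/vagrant/monitored</directories>

  <!-- Exclusions: ignore a file entirely, or only suppress its content diff -->
  <ignore>/home/vagrant/monitored/ignored.txt</ignore>
  <nodiff>/home/vagrant/monitored/secret.txt</nodiff>

  <!-- Scheduling: interval-based, or at a fixed time/day -->
  <frequency>300</frequency>
  <scan_time>21:00</scan_time>
  <scan_day>monday</scan_day>

  <alert_new_files>yes</alert_new_files>
  <scan_on_start>yes</scan_on_start>
</syscheck>
```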
Rebits commented 1 year ago

Tester review

Tester PR commit
@Rebits https://github.com/wazuh/wazuh/commit/dd1da6e87b63a703669c50ed418146d63e96cd30

Testing environment

| OS  | OS version | Deployment | Image/AMI | Notes |
|-----|------------|------------|-----------|-------|
| AIX | POWER8     | SiteOX     |           |       |

Tested packages

| Package       | Status  |
|---------------|---------|
| wazuh-manager | Pending |

Status

Conclusion :yellow_circle:

:yellow_circle: Some issues not related to the development were detected

Previous conclusion :red_circle:

During the testing process, unexpected behaviors related to the development itself have been detected:

- [Wazuh manager installation by sources and packages in AIX system produce unexpected errors](https://github.com/wazuh/wazuh-qa/issues/4053#issuecomment-1516623036)
- [MITRE database is not working](https://github.com/wazuh/wazuh-qa/issues/4053#issuecomment-1516623449)
- [Fluentd `Cannot send message to socket 'fluent_socket' error` in high load environments](https://github.com/wazuh/wazuh-qa/issues/4053#issuecomment-1523608207)
- [In the case of configuring the sending of logcollector block messages by fluentd, no alerts will be generated for the events of this same file](https://github.com/wazuh/wazuh-qa/issues/4053#issuecomment-1523613586)

:yellow_circle: In addition, some errors not related to the developments were detected:

- [Scan time option is not working as expected](https://github.com/wazuh/wazuh-qa/issues/4053#issuecomment-1516626696)
- [Scan on start is always enabled](https://github.com/wazuh/wazuh-qa/issues/4053#issuecomment-1516626109)
- [Auto Ignore option is not included in the documentation](https://github.com/wazuh/wazuh-qa/issues/4053#issuecomment-1516625196)
- [Alerts new file option is always enabled](https://github.com/wazuh/wazuh-qa/issues/4053#issuecomment-1516624672)
- [If the manager is restarted or stopped before the syscheck scheduled scan starts, all the expected alerts during the interval will be lost](https://github.com/wazuh/wazuh-qa/issues/4053#issuecomment-1516627158)
- [Scan on start option is not included in the documentation](https://github.com/wazuh/wazuh-qa/issues/4053#issuecomment-1516637505)
- [Wazuh local upgrade fails](https://github.com/wazuh/wazuh-qa/issues/4053#issuecomment-1522096662)
Rebits commented 1 year ago

Testing results :red_circle:

Installation

Installation - Sources :red_circle:

We are going to use the branch `16392-aix-build-fix` for the source installation.

**Installation steps**

To install Wazuh in local mode, the following steps were followed:

- Install the dependencies: `gmake deps RESOURCES_URL=http://packages.wazuh.com/deps/17 TARGET=local -j8`
- Build the local mode: `gmake TARGET=local USE_SELINUX=no -j8`
- Install the local mode: run `./install` and select `local` as the installation mode
- Change the `thread_stack_size` internal option to `2048`
- Start the manager: `/var/ossec/bin/wazuh-control restart`

***

**Manager compilation** :green_circle:

Compilation works as expected; no errors were detected.

> **Note**
> During testing, the reported bug https://github.com/wazuh/wazuh/issues/15085 occurred
Full compilation log ``` grep: can't open /etc/os-release grep: can't open /etc/redhat-release gmake build_sysinfo build_shared_modules build_syscollector gmake[1]: Entering directory '/opt/wazuh/src' grep: can't open /etc/os-release grep: can't open /etc/redhat-release cd data_provider/ && mkdir -p build && cd build && cmake -DINSTALL_PREFIX=/var/ossec .. && gmake cd shared_modules/dbsync/ && mkdir -p build && cd build && cmake -DINSTALL_PREFIX=/var/ossec .. && gmake -- Configuring done -- Configuring done -- Generating done -- Build files have been written to: /opt/wazuh/src/data_provider/build -- Generating done -- Build files have been written to: /opt/wazuh/src/shared_modules/dbsync/build gmake[2]: Entering directory '/opt/wazuh/src/data_provider/build' gmake[2]: Entering directory '/opt/wazuh/src/shared_modules/dbsync/build' gmake[3]: Entering directory '/opt/wazuh/src/shared_modules/dbsync/build' gmake[3]: Entering directory '/opt/wazuh/src/data_provider/build' gmake[4]: Entering directory '/opt/wazuh/src/shared_modules/dbsync/build' gmake[4]: Entering directory '/opt/wazuh/src/data_provider/build' gmake[4]: Leaving directory '/opt/wazuh/src/shared_modules/dbsync/build' gmake[4]: Leaving directory '/opt/wazuh/src/data_provider/build' [ 66%] Built target sysinfo gmake[4]: Entering directory '/opt/wazuh/src/data_provider/build' [ 60%] Built target dbsync gmake[4]: Entering directory '/opt/wazuh/src/shared_modules/dbsync/build' gmake[4]: Entering directory '/opt/wazuh/src/shared_modules/dbsync/build' gmake[4]: Leaving directory '/opt/wazuh/src/shared_modules/dbsync/build' gmake[4]: Leaving directory '/opt/wazuh/src/data_provider/build' gmake[4]: Leaving directory '/opt/wazuh/src/shared_modules/dbsync/build' [ 80%] Built target dbsync_example [100%] Built target sysinfo_test_tool gmake[3]: Leaving directory '/opt/wazuh/src/data_provider/build' [100%] Built target dbsync_test_tool gmake[3]: Leaving directory '/opt/wazuh/src/shared_modules/dbsync/build' gmake[2]: 
Leaving directory '/opt/wazuh/src/data_provider/build' gmake[2]: Leaving directory '/opt/wazuh/src/shared_modules/dbsync/build' cd shared_modules/rsync/ && mkdir -p build && cd build && cmake -DINSTALL_PREFIX=/var/ossec .. && gmake -- Configuring done -- Generating done -- Build files have been written to: /opt/wazuh/src/shared_modules/rsync/build gmake[2]: Entering directory '/opt/wazuh/src/shared_modules/rsync/build' gmake[3]: Entering directory '/opt/wazuh/src/shared_modules/rsync/build' gmake[4]: Entering directory '/opt/wazuh/src/shared_modules/rsync/build' gmake[4]: Leaving directory '/opt/wazuh/src/shared_modules/rsync/build' [ 37%] Built target rsync gmake[4]: Entering directory '/opt/wazuh/src/shared_modules/rsync/build' gmake[4]: Leaving directory '/opt/wazuh/src/shared_modules/rsync/build' [100%] Built target rsync_test_tool gmake[3]: Leaving directory '/opt/wazuh/src/shared_modules/rsync/build' gmake[2]: Leaving directory '/opt/wazuh/src/shared_modules/rsync/build' cd wazuh_modules/syscollector/ && mkdir -p build && cd build && cmake -DINSTALL_PREFIX=/var/ossec .. 
&& gmake -- Configuring done -- Generating done -- Build files have been written to: /opt/wazuh/src/wazuh_modules/syscollector/build gmake[2]: Entering directory '/opt/wazuh/src/wazuh_modules/syscollector/build' gmake[3]: Entering directory '/opt/wazuh/src/wazuh_modules/syscollector/build' gmake[4]: Entering directory '/opt/wazuh/src/wazuh_modules/syscollector/build' gmake[4]: Leaving directory '/opt/wazuh/src/wazuh_modules/syscollector/build' [ 66%] Built target syscollector gmake[4]: Entering directory '/opt/wazuh/src/wazuh_modules/syscollector/build' gmake[4]: Leaving directory '/opt/wazuh/src/wazuh_modules/syscollector/build' [100%] Built target syscollector_test_tool gmake[3]: Leaving directory '/opt/wazuh/src/wazuh_modules/syscollector/build' gmake[2]: Leaving directory '/opt/wazuh/src/wazuh_modules/syscollector/build' gmake[1]: Leaving directory '/opt/wazuh/src' gmake wazuh-maild - wazuh-csyslogd - wazuh-agentlessd - wazuh-execd - wazuh-logcollector - wazuh-remoted wazuh-agentd manage_agents utils active-responses wazuh-syscheckd wazuh-monitord wazuh-reportd wazuh-authd wazuh-analysisd wazuh-logtest-legacy wazuh-dbd - wazuh-integratord wazuh-modulesd wazuh-db gmake[1]: Entering directory '/opt/wazuh/src' grep: can't open /etc/os-release grep: can't open /etc/redhat-release gmake[1]: 'wazuh-maild' is up to date. gmake[1]: 'wazuh-csyslogd' is up to date. gmake[1]: 'wazuh-agentlessd' is up to date. gmake[1]: 'wazuh-execd' is up to date. gmake[1]: 'wazuh-logcollector' is up to date. gmake[1]: 'wazuh-remoted' is up to date. gmake[1]: 'wazuh-agentd' is up to date. gmake[1]: 'manage_agents' is up to date. gmake[1]: Nothing to be done for 'utils'. gmake[1]: Nothing to be done for 'active-responses'. gmake[1]: 'wazuh-syscheckd' is up to date. gmake[1]: 'wazuh-monitord' is up to date. gmake[1]: 'wazuh-reportd' is up to date. gmake[1]: 'wazuh-authd' is up to date. gmake[1]: 'wazuh-analysisd' is up to date. gmake[1]: 'wazuh-logtest-legacy' is up to date. 
gmake[1]: 'wazuh-dbd' is up to date. gmake[1]: 'wazuh-integratord' is up to date. gmake[1]: 'wazuh-modulesd' is up to date. gmake[1]: 'wazuh-db' is up to date. gmake[1]: Leaving directory '/opt/wazuh/src' gmake settings gmake[1]: Entering directory '/opt/wazuh/src' grep: can't open /etc/os-release grep: can't open /etc/redhat-release General settings: TARGET: local V: DEBUG: DEBUGAD INSTALLDIR: /var/ossec DATABASE: ONEWAY: no CLEANFULL: no RESOURCES_URL: https://packages.wazuh.com/deps/17 EXTERNAL_SRC_ONLY: User settings: WAZUH_GROUP: wazuh WAZUH_USER: wazuh USE settings: USE_ZEROMQ: no USE_GEOIP: no USE_PRELUDE: no USE_INOTIFY: no USE_BIG_ENDIAN: no USE_SELINUX: no USE_AUDIT: no DISABLE_SYSC: no DISABLE_CISCAT: no Mysql settings: includes: libs: Pgsql settings: includes: libs: Defines: -DOSSECHIDS -DUSER="wazuh" -DGROUPGLOBAL="wazuh" -DAIX -DAIX -D__unix -D_LINUX_SOURCE_COMPAT -DHIGHFIRST -DENABLE_SYSC -DENABLE_CISCAT -DLOCAL Compiler: CFLAGS -pthread -DNDEBUG -O2 -DOSSECHIDS -DUSER="wazuh" -DGROUPGLOBAL="wazuh" -DAIX -DAIX -D__unix -D_LINUX_SOURCE_COMPAT -DHIGHFIRST -DENABLE_SYSC -DENABLE_CISCAT -DLOCAL -pipe -Wall -Wextra -std=gnu99 -I./ -I./headers/ -Iexternal/openssl/include -Iexternal/cJSON/ -Iexternal/libyaml/include -Iexternal/curl/include -Iexternal/msgpack/include -Iexternal/bzip2/ -Ishared_modules/common -Ishared_modules/dbsync/include -Ishared_modules/rsync/include -Iwazuh_modules/syscollector/include -Idata_provider/include -Iexternal/libpcre2/include -I/builddir/output/include LDFLAGS -pthread -L./lib '-Wl,-blibpath:/var/ossec/lib:/usr/lib:/lib' -O2 -Lshared_modules/dbsync/build/lib -Lshared_modules/rsync/build/lib -Lwazuh_modules/syscollector/build/lib -Ldata_provider/build/lib LIBS CC gcc MAKE gmake gmake[1]: Leaving directory '/opt/wazuh/src' Done building local ```
**Wazuh Local installation** :red_circle:

Multiple errors were detected during installation:

- `install: The -c and -m flags may not be used together`
- `install: File root was not found.`
- `gmake[1]: *** [Makefile:1313: altbininstall] Error 2`
- `gmake: *** [Makefile:31: install] Error 2`
- `gmake: *** [Makefile:29: install] Error 2`
Full installation log ``` Wazuh v4.3.11 (Rev. 40324) Installation Script - https://www.wazuh.com You are about to start the installation process of Wazuh. You must have a C compiler pre-installed in your system. - System: AIX vrebollo 1 (AIX 7.1) - User: root - Host: vrebollo -- Press ENTER to continue or Ctrl-C to abort. -- -n 1- What kind of installation do you want (manager, agent, local, hybrid or help)? local - Local installation chosen. -n 2- Choose where to install Wazuh [/var/ossec]: - Installation will be made at /var/ossec . 3- Configuring Wazuh. -n 3.1- Do you want e-mail notification? (y/n) [n]: --- Email notification disabled. -n 3.2- Do you want to run the integrity check daemon? (y/n) [y]: - Running syscheck (integrity check daemon). -n 3.3- Do you want to run the rootkit detection engine? (y/n) [y]: - Running rootcheck (rootkit detection). 3.5- Active response allows you to execute a specific command based on the events received. By default, no active responses are defined. - Default white list for the active response: - 127.0.0.1 - 9.9.9.9 -n - Do you want to add more IPs to the white list? (y/n)? [n]: -n 3.6- Do you want to start Wazuh after the installation? (y/n) [y]: - Wazuh will start at the end of installation. 3.7- Setting the configuration to analyze the following logs: -- /var/ossec/logs/active-responses.log - If you want to monitor any other file, just change the ossec.conf and add a new localfile entry. Any questions about the configuration can be answered by visiting us online at https://documentation.wazuh.com/. --- Press ENTER to continue --- 4- Installing the system DIR="/var/ossec" - Running the Makefile grep: can't open /etc/os-release grep: can't open /etc/redhat-release gmake build_sysinfo build_shared_modules build_syscollector gmake[1]: Entering directory '/opt/wazuh/src' grep: can't open /etc/os-release grep: can't open /etc/redhat-release cd data_provider/ && mkdir -p build && cd build && cmake -DINSTALL_PREFIX=/var/ossec .. 
&& gmake -- Configuring done -- Generating done -- Build files have been written to: /opt/wazuh/src/data_provider/build gmake[2]: Entering directory '/opt/wazuh/src/data_provider/build' gmake[3]: Entering directory '/opt/wazuh/src/data_provider/build' gmake[4]: Entering directory '/opt/wazuh/src/data_provider/build' gmake[4]: Leaving directory '/opt/wazuh/src/data_provider/build' [ 66%] Built target sysinfo gmake[4]: Entering directory '/opt/wazuh/src/data_provider/build' gmake[4]: Leaving directory '/opt/wazuh/src/data_provider/build' [100%] Built target sysinfo_test_tool gmake[3]: Leaving directory '/opt/wazuh/src/data_provider/build' gmake[2]: Leaving directory '/opt/wazuh/src/data_provider/build' cd shared_modules/dbsync/ && mkdir -p build && cd build && cmake -DINSTALL_PREFIX=/var/ossec .. && gmake -- Configuring done -- Generating done -- Build files have been written to: /opt/wazuh/src/shared_modules/dbsync/build gmake[2]: Entering directory '/opt/wazuh/src/shared_modules/dbsync/build' gmake[3]: Entering directory '/opt/wazuh/src/shared_modules/dbsync/build' gmake[4]: Entering directory '/opt/wazuh/src/shared_modules/dbsync/build' gmake[4]: Leaving directory '/opt/wazuh/src/shared_modules/dbsync/build' [ 60%] Built target dbsync gmake[4]: Entering directory '/opt/wazuh/src/shared_modules/dbsync/build' gmake[4]: Leaving directory '/opt/wazuh/src/shared_modules/dbsync/build' [ 80%] Built target dbsync_example gmake[4]: Entering directory '/opt/wazuh/src/shared_modules/dbsync/build' gmake[4]: Leaving directory '/opt/wazuh/src/shared_modules/dbsync/build' [100%] Built target dbsync_test_tool gmake[3]: Leaving directory '/opt/wazuh/src/shared_modules/dbsync/build' gmake[2]: Leaving directory '/opt/wazuh/src/shared_modules/dbsync/build' cd shared_modules/rsync/ && mkdir -p build && cd build && cmake -DINSTALL_PREFIX=/var/ossec .. 
&& gmake -- Configuring done -- Generating done -- Build files have been written to: /opt/wazuh/src/shared_modules/rsync/build gmake[2]: Entering directory '/opt/wazuh/src/shared_modules/rsync/build' gmake[3]: Entering directory '/opt/wazuh/src/shared_modules/rsync/build' gmake[4]: Entering directory '/opt/wazuh/src/shared_modules/rsync/build' gmake[4]: Leaving directory '/opt/wazuh/src/shared_modules/rsync/build' [ 37%] Built target rsync gmake[4]: Entering directory '/opt/wazuh/src/shared_modules/rsync/build' gmake[4]: Leaving directory '/opt/wazuh/src/shared_modules/rsync/build' [100%] Built target rsync_test_tool gmake[3]: Leaving directory '/opt/wazuh/src/shared_modules/rsync/build' gmake[2]: Leaving directory '/opt/wazuh/src/shared_modules/rsync/build' cd wazuh_modules/syscollector/ && mkdir -p build && cd build && cmake -DINSTALL_PREFIX=/var/ossec .. && gmake -- Configuring done -- Generating done -- Build files have been written to: /opt/wazuh/src/wazuh_modules/syscollector/build gmake[2]: Entering directory '/opt/wazuh/src/wazuh_modules/syscollector/build' gmake[3]: Entering directory '/opt/wazuh/src/wazuh_modules/syscollector/build' gmake[4]: Entering directory '/opt/wazuh/src/wazuh_modules/syscollector/build' gmake[4]: Leaving directory '/opt/wazuh/src/wazuh_modules/syscollector/build' [ 66%] Built target syscollector gmake[4]: Entering directory '/opt/wazuh/src/wazuh_modules/syscollector/build' gmake[4]: Leaving directory '/opt/wazuh/src/wazuh_modules/syscollector/build' [100%] Built target syscollector_test_tool gmake[3]: Leaving directory '/opt/wazuh/src/wazuh_modules/syscollector/build' gmake[2]: Leaving directory '/opt/wazuh/src/wazuh_modules/syscollector/build' gmake[1]: Leaving directory '/opt/wazuh/src' gmake wazuh-maild - wazuh-csyslogd - wazuh-agentlessd - wazuh-execd - wazuh-logcollector - wazuh-remoted wazuh-agentd manage_agents utils active-responses wazuh-syscheckd wazuh-monitord wazuh-reportd wazuh-authd wazuh-analysisd 
wazuh-logtest-legacy wazuh-dbd - wazuh-integratord wazuh-modulesd wazuh-db gmake[1]: Entering directory '/opt/wazuh/src' grep: can't open /etc/os-release grep: can't open /etc/redhat-release gmake[1]: 'wazuh-maild' is up to date. gmake[1]: 'wazuh-csyslogd' is up to date. gmake[1]: 'wazuh-agentlessd' is up to date. gmake[1]: 'wazuh-execd' is up to date. gmake[1]: 'wazuh-logcollector' is up to date. gmake[1]: 'wazuh-remoted' is up to date. gmake[1]: 'wazuh-agentd' is up to date. gmake[1]: 'manage_agents' is up to date. gmake[1]: Nothing to be done for 'utils'. gmake[1]: Nothing to be done for 'active-responses'. gmake[1]: 'wazuh-syscheckd' is up to date. gmake[1]: 'wazuh-monitord' is up to date. gmake[1]: 'wazuh-reportd' is up to date. gmake[1]: 'wazuh-authd' is up to date. gmake[1]: 'wazuh-analysisd' is up to date. gmake[1]: 'wazuh-logtest-legacy' is up to date. gmake[1]: 'wazuh-dbd' is up to date. gmake[1]: 'wazuh-integratord' is up to date. gmake[1]: 'wazuh-modulesd' is up to date. gmake[1]: 'wazuh-db' is up to date. 
gmake[1]: Leaving directory '/opt/wazuh/src' gmake settings gmake[1]: Entering directory '/opt/wazuh/src' grep: can't open /etc/os-release grep: can't open /etc/redhat-release General settings: TARGET: local V: DEBUG: DEBUGAD INSTALLDIR: /var/ossec DATABASE: ONEWAY: no CLEANFULL: no RESOURCES_URL: https://packages.wazuh.com/deps/17 EXTERNAL_SRC_ONLY: User settings: WAZUH_GROUP: wazuh WAZUH_USER: wazuh USE settings: USE_ZEROMQ: no USE_GEOIP: no USE_PRELUDE: no USE_INOTIFY: no USE_BIG_ENDIAN: no USE_SELINUX: no USE_AUDIT: no DISABLE_SYSC: no DISABLE_CISCAT: no Mysql settings: includes: libs: Pgsql settings: includes: libs: Defines: -DOSSECHIDS -DUSER="wazuh" -DGROUPGLOBAL="wazuh" -DAIX -DAIX -D__unix -D_LINUX_SOURCE_COMPAT -DHIGHFIRST -DENABLE_SYSC -DENABLE_CISCAT -DLOCAL Compiler: CFLAGS -pthread -DNDEBUG -O2 -DOSSECHIDS -DUSER="wazuh" -DGROUPGLOBAL="wazuh" -DAIX -DAIX -D__unix -D_LINUX_SOURCE_COMPAT -DHIGHFIRST -DENABLE_SYSC -DENABLE_CISCAT -DLOCAL -pipe -Wall -Wextra -std=gnu99 -I./ -I./headers/ -Iexternal/openssl/include -Iexternal/cJSON/ -Iexternal/libyaml/include -Iexternal/curl/include -Iexternal/msgpack/include -Iexternal/bzip2/ -Ishared_modules/common -Ishared_modules/dbsync/include -Ishared_modules/rsync/include -Iwazuh_modules/syscollector/include -Idata_provider/include -Iexternal/libpcre2/include -I/builddir/output/include LDFLAGS -pthread -L./lib '-Wl,-blibpath:/var/ossec/lib:/usr/lib:/lib' -O2 -Lshared_modules/dbsync/build/lib -Lshared_modules/rsync/build/lib -Lwazuh_modules/syscollector/build/lib -Ldata_provider/build/lib LIBS CC gcc MAKE gmake gmake[1]: Leaving directory '/opt/wazuh/src' Done building local Wait for success... success Removing old SCA policies... Installing SCA policies... Installing additional SCA policies... 
grep: can't open /etc/os-release grep: can't open /etc/redhat-release cd external/cpython/ && export WPATH_LIB=/var/ossec/lib && export SOURCE_PATH=/opt/wazuh/src && export WAZUH_FFI_PATH=external/libffi/ && gmake install gmake[1]: Entering directory '/opt/wazuh/src/external/cpython' Creating directory /var/ossec/framework/python/bin install: The -c and -m flags may not be used together. Usage: install [-c dira] [-f dirb] [-i] [-m] [-M mode] [-O owner] [-G group] [-S] [-n dirc] [-o] [-s] file [dirx ...] Creating directory /var/ossec/framework/python/lib install: The -c and -m flags may not be used together. Usage: install [-c dira] [-f dirb] [-i] [-m] [-M mode] [-O owner] [-G group] [-S] [-n dirc] [-o] [-s] file [dirx ...] gmake[1]: *** [Makefile:1313: altbininstall] Error 2 gmake[1]: Leaving directory '/opt/wazuh/src/external/cpython' gmake: *** [Makefile:2134: install_python] Error 2 install: File root was not found. gmake: *** [Makefile:31: install] Error 2 install: File root was not found. gmake: *** [Makefile:29: install] Error 2 - System is AIX. - Init script modified to start Wazuh during boot. Starting Wazuh... local Starting Wazuh v4.3.11... Started wazuh-csyslogd... Started wazuh-dbd... 2023/04/19 10:52:30 wazuh-integratord: INFO: Remote integrations not configured. Clean exit. Started wazuh-integratord... Started wazuh-agentlessd... Started wazuh-db... Started wazuh-execd... 2023/04/19 10:52:30 wazuh-maild: INFO: E-Mail notification disabled. Clean Exit. Started wazuh-maild... Started wazuh-analysisd... Started wazuh-syscheckd... Started wazuh-logcollector... Started wazuh-monitord... Started wazuh-modulesd... Completed. - Configuration finished properly. - To start Wazuh: /var/ossec/bin/wazuh-control start - To stop Wazuh: /var/ossec/bin/wazuh-control stop - The configuration can be viewed or modified at /var/ossec/etc/ossec.conf Thanks for using Wazuh. Please don't hesitate to contact us if you need help or find any bugs. 
Use our public Mailing List at: https://groups.google.com/forum/#!forum/wazuh More information can be found at: - http://www.wazuh.com --- Press ENTER to finish (maybe more information below). --- bash-5.1# /var/ossec/bin/wazuh-control restart Killing wazuh-modulesd... Killing wazuh-monitord... Killing wazuh-logcollector... Killing wazuh-syscheckd... wazuh-analysisd not running... wazuh-maild not running... Killing wazuh-execd... Killing wazuh-db... wazuh-agentlessd not running... wazuh-integratord not running... wazuh-dbd not running... wazuh-csyslogd not running... Wazuh v4.3.11 Stopped Starting Wazuh v4.3.11... Started wazuh-csyslogd... Started wazuh-dbd... 2023/04/19 10:52:53 wazuh-integratord: INFO: Remote integrations not configured. Clean exit. Started wazuh-integratord... Started wazuh-agentlessd... Started wazuh-db... Started wazuh-execd... 2023/04/19 10:52:53 wazuh-maild: INFO: E-Mail notification disabled. Clean Exit. Started wazuh-maild... Started wazuh-analysisd... Started wazuh-syscheckd... Started wazuh-logcollector... Started wazuh-monitord... Started wazuh-modulesd... Completed. ```
**Initial startup** :red_circle:

As [already reported](https://github.com/wazuh/wazuh/issues/16392#issuecomment-1497932971), the MITRE database was not correctly created:

```
2023/04/19 10:55:14 wazuh-db: ERROR: Can't open SQLite database 'var/db/mitre.db': unable to open database file
2023/04/19 10:55:14 wazuh-analysisd: ERROR: Bad response from wazuh-db: Couldn't open DB mitre
2023/04/19 10:55:14 wazuh-analysisd: ERROR: Response from the Mitre database cannot be parsed.
2023/04/19 10:55:14 wazuh-analysisd: ERROR: Mitre matrix information could not be loaded.
```

This could affect the alerts' MITRE value fields.
ossec.log ``` 2023/04/19 10:55:13 wazuh-db: INFO: Started (pid: 10944620). 2023/04/19 10:55:13 wazuh-execd: INFO: Started (pid: 12648626). 2023/04/19 10:55:13 wazuh-maild: INFO: E-Mail notification disabled. Clean Exit. 2023/04/19 10:55:13 wazuh-syscheckd: INFO: Started (pid: 9896064). 2023/04/19 10:55:13 wazuh-syscheckd: INFO: (6003): Monitoring path: '/bin', with options 'size | permissions | owner | group | mtime | inode | hash_md5 | hash_sha1 | hash_sha256 | scheduled'. 2023/04/19 10:55:13 wazuh-syscheckd: INFO: (6003): Monitoring path: '/boot', with options 'size | permissions | owner | group | mtime | inode | hash_md5 | hash_sha1 | hash_sha256 | scheduled'. 2023/04/19 10:55:13 wazuh-syscheckd: INFO: (6003): Monitoring path: '/etc', with options 'size | permissions | owner | group | mtime | inode | hash_md5 | hash_sha1 | hash_sha256 | scheduled'. 2023/04/19 10:55:13 wazuh-syscheckd: INFO: (6003): Monitoring path: '/sbin', with options 'size | permissions | owner | group | mtime | inode | hash_md5 | hash_sha1 | hash_sha256 | scheduled'. 2023/04/19 10:55:13 wazuh-syscheckd: INFO: (6003): Monitoring path: '/usr/bin', with options 'size | permissions | owner | group | mtime | inode | hash_md5 | hash_sha1 | hash_sha256 | scheduled'. 2023/04/19 10:55:13 wazuh-syscheckd: INFO: (6003): Monitoring path: '/usr/sbin', with options 'size | permissions | owner | group | mtime | inode | hash_md5 | hash_sha1 | hash_sha256 | scheduled'. 
2023/04/19 10:55:13 wazuh-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/mtab' 2023/04/19 10:55:13 wazuh-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/hosts.deny' 2023/04/19 10:55:13 wazuh-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/mail/statistics' 2023/04/19 10:55:13 wazuh-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/random-seed' 2023/04/19 10:55:13 wazuh-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/random.seed' 2023/04/19 10:55:13 wazuh-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/adjtime' 2023/04/19 10:55:13 wazuh-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/httpd/logs' 2023/04/19 10:55:13 wazuh-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/utmpx' 2023/04/19 10:55:13 wazuh-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/wtmpx' 2023/04/19 10:55:13 wazuh-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/cups/certs' 2023/04/19 10:55:13 wazuh-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/dumpdates' 2023/04/19 10:55:13 wazuh-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/svc/volatile' 2023/04/19 10:55:13 wazuh-syscheckd: INFO: (6207): Ignore 'file' sregex '.log$|.swp$' 2023/04/19 10:55:13 wazuh-syscheckd: INFO: (6004): No diff for file: '/etc/ssl/private.key' 2023/04/19 10:55:13 wazuh-syscheckd: INFO: (6000): Starting daemon... 2023/04/19 10:55:13 wazuh-syscheckd: INFO: (6010): File integrity monitoring scan frequency: 43200 seconds 2023/04/19 10:55:13 wazuh-syscheckd: INFO: (6008): File integrity monitoring scan started. 2023/04/19 10:55:13 wazuh-logcollector: INFO: (1950): Analyzing file: '/var/ossec/logs/active-responses.log'. 
2023/04/19 10:55:13 wazuh-logcollector: INFO: Monitoring output of command(360): df -P 2023/04/19 10:55:13 wazuh-logcollector: INFO: Monitoring full output of command(360): netstat -tulpn | sed 's/\([[:alnum:]]\+\)\ \+[[:digit:]]\+\ \+[[:digit:]]\+\ \+\(.*\):\([[:digit:]]*\)\ \+\([0-9\.\:\*]\+\).\+\ \([[:digit:]]*\/[[:alnum:]\-]*\).*/\1 \2 == \3 == \4 \5/' | sort -k 4 -g | sed 's/ == \(.*\) ==/:\1/' | sed 1,2d 2023/04/19 10:55:13 wazuh-logcollector: INFO: Monitoring full output of command(360): last -n 20 2023/04/19 10:55:13 rootcheck: INFO: Starting rootcheck scan. 2023/04/19 10:55:13 wazuh-logcollector: INFO: Started (pid: 9633934). 2023/04/19 10:55:13 wazuh-monitord: INFO: Started (pid: 13107234). 2023/04/19 10:55:13 wazuh-modulesd: INFO: Started (pid: 5046462). 2023/04/19 10:55:13 wazuh-modulesd:agent-upgrade: INFO: (8153): Module Agent Upgrade started. 2023/04/19 10:55:13 wazuh-modulesd:ciscat: INFO: Module disabled. Exiting... 2023/04/19 10:55:13 wazuh-modulesd:osquery: INFO: Module disabled. Exiting... 2023/04/19 10:55:13 wazuh-modulesd:database: INFO: Module started. 2023/04/19 10:55:13 wazuh-modulesd:download: INFO: Module started. 2023/04/19 10:55:13 wazuh-modulesd:task-manager: INFO: (8200): Module Task Manager started. 2023/04/19 10:55:14 wazuh-analysisd: INFO: Total rules enabled: '6327' 2023/04/19 10:55:14 wazuh-analysisd: INFO: Started (pid: 13041816). 2023/04/19 10:55:14 wazuh-db: ERROR: Can't open SQLite database 'var/db/mitre.db': unable to open database file 2023/04/19 10:55:14 wazuh-analysisd: ERROR: Bad response from wazuh-db: Couldn't open DB mitre 2023/04/19 10:55:14 wazuh-analysisd: ERROR: Response from the Mitre database cannot be parsed. 2023/04/19 10:55:14 wazuh-analysisd: ERROR: Mitre matrix information could not be loaded. 2023/04/19 10:55:14 wazuh-analysisd: INFO: (7200): Logtest started 2023/04/19 11:00:19 wazuh-syscheckd: INFO: (6009): File integrity monitoring scan ended. 2023/04/19 11:00:28 rootcheck: INFO: Ending rootcheck scan. 
```

Installation - Packages :red_circle:

Unexpected errors at installation time:

```
bash-5.1# rpm -ivh ./wazuh-local-4.3.11-1.aix7.1.ppc.rpm
Verifying...                          ################################# [100%]
Preparing...                          ################################# [100%]
Updating / installing...
   1:wazuh-local-4.3.11-1             ################################# [100%]
rpm_share: 0645-024 Unable to access directory /var/ossec/queue/vulnerabilities
rpm_share: 0645-007 ATTENTION: update_dir() returned an unexpected result.
rpm_share: 0645-007 ATTENTION: update_inst_root() returned an unexpected result.
2023/04/25 10:18:56 wazuh-analysisd: ERROR: Bad response from wazuh-db: Couldn't open DB mitre
2023/04/25 10:18:56 wazuh-analysisd: ERROR: Response from the Mitre database cannot be parsed.
2023/04/25 10:18:56 wazuh-analysisd: ERROR: Mitre matrix information could not be loaded.
```

In addition, the MITRE database is not working as expected, producing the same warnings and errors in the manager logs as the installation from sources:

```
2023/04/25 10:18:56 wazuh-analysisd: ERROR: Bad response from wazuh-db: Couldn't open DB mitre
2023/04/25 10:18:56 wazuh-analysisd: ERROR: Response from the Mitre database cannot be parsed.
2023/04/25 10:18:56 wazuh-analysisd: ERROR: Mitre matrix information could not be loaded.
```

Upgrade

Upgrade - Sources :red_circle:

Upgrading the manager from sources fails. The upgrade process does not detect that the current installation mode is local and tries to install the server mode instead, making the process fail:

```
Pgsql settings: includes: libs: Defines: -DOSSECHIDS -DUSER="wazuh" -DGROUPGLOBAL="wazuh" -DAIX -DAIX -D__unix -D_LINUX_SOURCE_COMPAT -DHIGHFIRST -DENABLE_SYSC -DENABLE_CISCAT Compiler: CFLAGS -pthread -DNDEBUG -O2 -DOSSECHIDS -DUSER="wazuh" -DGROUPGLOBAL="wazuh" -DAIX -DAIX -D__unix -D_LINUX_SOURCE_COMPAT -DHIGHFIRST -DENABLE_SYSC -DENABLE_CISCAT -pipe -Wall -Wextra -std=gnu99 -I./ -I./headers/ -Iexternal/openssl/include -Iexternal/cJSON/ -Iexternal/libyaml/include -Iexternal/curl/include -Iexternal/msgpack/include -Iexternal/bzip2/ -Ishared_modules/common -Ishared_modules/dbsync/include -Ishared_modules/rsync/include -Iwazuh_modules/syscollector/include -Idata_provider/include -Iexternal/libpcre2/include -I/builddir/output/include LDFLAGS -pthread -L./lib '-Wl,-blibpath:/var/ossec/lib:/usr/lib:/lib' -O2 -Lshared_modules/dbsync/build/lib -Lshared_modules/rsync/build/lib -Lwazuh_modules/syscollector/build/lib -Ldata_provider/build/lib LIBS CC gcc MAKE gmake gmake[1]: Leaving directory '/opt/wazuh/src' Done building server Stopping Wazuh... Wait for success... success Removing old SCA policies... Installing SCA policies... Installing additional SCA policies... grep: can't open /etc/os-release grep: can't open /etc/redhat-release cd external/cpython/ && export WPATH_LIB=/var/ossec/lib && export SOURCE_PATH=/opt/wazuh/src && export WAZUH_FFI_PATH=external/libffi/ && gmake install gmake[1]: Entering directory '/opt/wazuh/src/external/cpython' Creating directory /var/ossec/framework/python/bin install: The -c and -m flags may not be used together. Usage: install [-c dira] [-f dirb] [-i] [-m] [-M mode] [-O owner] [-G group] [-S] [-n dirc] [-o] [-s] file [dirx ...] 
Creating directory /var/ossec/framework/python/lib install: The -c and -m flags may not be used together. Usage: install [-c dira] [-f dirb] [-i] [-m] [-M mode] [-O owner] [-G group] [-S] [-n dirc] [-o] [-s] file [dirx ...] gmake[1]: *** [Makefile:1313: altbininstall] Error 2 gmake[1]: Leaving directory '/opt/wazuh/src/external/cpython' gmake: *** [Makefile:2134: install_python] Error 2 install: File root was not found. gmake: *** [Makefile:31: install] Error 2 /bin/sh: pgrep: not found install: File root was not found. gmake: *** [Makefile:29: install] Error 2 find: bad option -maxdepth find: bad option -maxdepth - System is AIX. - Init script modified to start Wazuh during boot. Wait for success... success Searching for deprecated rules and decoders... Starting Wazuh... /var/ossec/bin/wazuh-control[4]: /var/ossec/bin/wazuh-apid: not found wazuh-apid: Configuration error. Exiting - Configuration finished properly. - To start Wazuh: /var/ossec/bin/wazuh-control start - To stop Wazuh: /var/ossec/bin/wazuh-control stop - The configuration can be viewed or modified at /var/ossec/etc/ossec.conf Thanks for using Wazuh. Please don't hesitate to contact us if you need help or find any bugs. Use our public Mailing List at: https://groups.google.com/forum/#!forum/wazuh More information can be found at: - http://www.wazuh.com --- Press ENTER to finish (maybe more information below). --- - Upgrade completed. bash-5.1# ```
Upgrade - Packages :red_circle: Upgrading the local installation does not work. The upgrade process produces conflict errors:
```
bash-5.1# rpm -ivh ./wazuh-local-4.3.11-1.aix7.1.ppc.rpm
error: Failed dependencies:
        wazuh-local conflicts with wazuh-local-4.3.11-1.ppc
        wazuh-local conflicts with (installed) wazuh-local-4.3.11-1.ppc
```

Uninstall

Uninstall - Sources :green_circle:
```
bash-5.1# WAZUH_HOME="/var/ossec/"
bash-5.1# /var/ossec/bin/wazuh-control stop
wazuh-clusterd not running...
wazuh-modulesd not running...
wazuh-monitord not running...
wazuh-logcollector not running...
wazuh-remoted not running...
wazuh-syscheckd not running...
wazuh-analysisd not running...
wazuh-maild not running...
wazuh-execd not running...
wazuh-db not running...
wazuh-authd not running...
wazuh-agentlessd not running...
wazuh-integratord not running...
wazuh-dbd not running...
wazuh-csyslogd not running...
wazuh-apid not running...
Wazuh v4.3.11 Stopped
bash-5.1# rm -rf $WAZUH_HOME
bash-5.1# find /etc/rc.d -name "*wazuh*" | xargs rm -f
bash-5.1# userdel wazuh 2> /dev/null
bash-5.1# groupdel wazuh 2> /dev/null
```
Uninstall - Packages :green_circle:
```
bash-5.1# rpm -e wazuh-local
warning: /var/ossec/etc/ossec.conf saved as /var/ossec/etc/ossec.conf.rpmsave
```

Syscheck

Basic Case - File Creation :green_circle:
ossec.conf
```xml
<syscheck>
  <disabled>no</disabled>
  <frequency>10</frequency>
  <scan_on_start>yes</scan_on_start>
  <alert_new_files>yes</alert_new_files>
  <directories>/tmp/syscheck-testing</directories>
</syscheck>
```
The path is correctly monitored: ``` 2023/04/20 03:45:17 wazuh-syscheckd: INFO: (6003): Monitoring path: '/tmp/syscheck-testing', with options 'size | permissions | owner | group | mtime | inode | hash_md5 | hash_sha1 | hash_sha256 | scheduled'. ``` Alert was correctly triggered: ``` {"timestamp":"2023-04-20T03:46:23.392CDT","rule":{"level":5,"description":"File added to the system.","id":"554","firedtimes":1,"mail":false,"groups":["ossec","syscheck","syscheck_entry_added","syscheck_file"],"pci_dss":["11.5"],"gpg13":["4.11"],"gdpr":["II_5.1.f"],"hipaa":["164.312.c.1","164.312.c.2"],"nist_800_53":["SI.7"],"tsc":["PI1.4","PI1.5","CC6.1","CC6.8","CC7.2","CC7.3"]},"agent":{"id":"000","name":"vrebollo"},"manager":{"name":"vrebollo"},"id":"1681980383.15550","full_log":"File '/tmp/syscheck-testing/file0' added\nMode: scheduled\n","syscheck":{"path":"/tmp/syscheck-testing/file0","mode":"scheduled","size_after":"8","perm_after":"rw-r--r--","uid_after":"0","gid_after":"0","md5_after":"ec4d59b2732f2f153240a8ff746282a6","sha1_after":"e9f9b899cc7161e1eb0b1e4c042fdfabccf7958c","sha256_after":"41920b348e0c6ff2ef9b7e3ee9308726aa5250fa717883e073ff6a936a9325a4","uname_after":"root","gname_after":"system","mtime_after":"2023-04-20T03:46:14","inode_after":8198,"event":"added"},"decoder":{"name":"syscheck_new_entry"},"location":"syscheck"} ```
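Checking for the expected "File added" alerts can be scripted instead of inspected by hand. A minimal sketch, assuming alerts are read from `alerts.json` (one JSON document per line, as in the sample above); `find_added_files` is a hypothetical helper, not part of Wazuh:

```python
import json

def find_added_files(alert_lines, base_path):
    """Return paths of 'File added' syscheck alerts (rule 554) under base_path."""
    paths = []
    for line in alert_lines:
        alert = json.loads(line)
        if alert.get("rule", {}).get("id") == "554" and \
           alert.get("syscheck", {}).get("path", "").startswith(base_path):
            paths.append(alert["syscheck"]["path"])
    return paths

# Trimmed-down version of the alert shown above.
sample = ['{"rule": {"id": "554"}, "syscheck": {"path": "/tmp/syscheck-testing/file0", "event": "added"}}']
print(find_added_files(sample, "/tmp/syscheck-testing"))  # → ['/tmp/syscheck-testing/file0']
```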
Basic Case - File Update (File content) :red_circle:
ossec.conf
```xml
<syscheck>
  <disabled>no</disabled>
  <frequency>10</frequency>
  <scan_on_start>yes</scan_on_start>
  <alert_new_files>yes</alert_new_files>
  <directories>/tmp/syscheck-testing</directories>
</syscheck>
```
The path is correctly monitored: ``` 2023/04/20 03:45:17 wazuh-syscheckd: INFO: (6003): Monitoring path: '/tmp/syscheck-testing', with options 'size | permissions | owner | group | mtime | inode | hash_md5 | hash_sha1 | hash_sha256 | scheduled'. ``` The expected alert was triggered: ``` {"timestamp":"2023-04-20T03:48:02.424CDT","rule":{"level":7,"description":"Integrity checksum changed.","id":"550","firedtimes":1,"mail":false,"groups":["ossec","syscheck","syscheck_entry_modified","syscheck_file"],"pci_dss":["11.5"],"gpg13":["4.11"],"gdpr":["II_5.1.f"],"hipaa":["164.312.c.1","164.312.c.2"],"nist_800_53":["SI.7"],"tsc":["PI1.4","PI1.5","CC6.1","CC6.8","CC7.2","CC7.3"]},"agent":{"id":"000","name":"vrebollo"},"manager":{"name":"vrebollo"},"id":"1681980482.16235","full_log":"File '/tmp/syscheck-testing/file0' modified\nMode: scheduled\nChanged attributes: size,mtime,md5,sha1,sha256\nSize changed from '8' to '16'\nOld modification time was: '1681980374', now it is '1681980481'\nOld md5sum was: 'ec4d59b2732f2f153240a8ff746282a6'\nNew md5sum is : '5d3d0a09d02278b66e1cb1246206ce6c'\nOld sha1sum was: 'e9f9b899cc7161e1eb0b1e4c042fdfabccf7958c'\nNew sha1sum is : '9431dff60933b9a80e5cfd0905a76e1704c743b1'\nOld sha256sum was: '41920b348e0c6ff2ef9b7e3ee9308726aa5250fa717883e073ff6a936a9325a4'\nNew sha256sum is : 
'3b9746fd2685c657f5526719e3682d5785050790ed0e42e40395d395c7eeef2d'\n","syscheck":{"path":"/tmp/syscheck-testing/file0","mode":"scheduled","size_before":"8","size_after":"16","perm_after":"rw-r--r--","uid_after":"0","gid_after":"0","md5_before":"ec4d59b2732f2f153240a8ff746282a6","md5_after":"5d3d0a09d02278b66e1cb1246206ce6c","sha1_before":"e9f9b899cc7161e1eb0b1e4c042fdfabccf7958c","sha1_after":"9431dff60933b9a80e5cfd0905a76e1704c743b1","sha256_before":"41920b348e0c6ff2ef9b7e3ee9308726aa5250fa717883e073ff6a936a9325a4","sha256_after":"3b9746fd2685c657f5526719e3682d5785050790ed0e42e40395d395c7eeef2d","uname_after":"root","gname_after":"system","mtime_before":"2023-04-20T03:46:14","mtime_after":"2023-04-20T03:48:01","inode_after":8198,"changed_attributes":["size","mtime","md5","sha1","sha256"],"event":"modified"},"decoder":{"name":"syscheck_integrity_changed"},"location":"syscheck"} ``` However, the following warning is generated: ``` 2023/04/20 03:48:02 wazuh-analysisd: WARNING: Mitre Technique ID 'T1565.001' not found in database. ``` In the AIX manager, the MITRE database does not work properly. This will cause the absence of this information in all the alerts. In this case, it was expected to include the following MITRE information: ``` "mitre":{ "id":[ "T1565.001" ], "tactic":[ "Impact" ], "technique":[ "Stored Data Manipulation" ] }, ``` > **Note** > This issue will not be taken into account in other checks to avoid duplication
Basic Case - File Update (File permissions) :green_circle:
ossec.conf
```xml
<syscheck>
  <disabled>no</disabled>
  <frequency>10</frequency>
  <scan_on_start>yes</scan_on_start>
  <alert_new_files>yes</alert_new_files>
  <directories>/tmp/syscheck-testing</directories>
</syscheck>
```
File integrity alerts should be triggered in the case of permissions changing. In this case, we have to change the permissions of the `/tmp/syscheck-testing/file` file to `777` ``` chmod 777 /tmp/syscheck-testing/file ``` The expected alert was correctly triggered: ``` { "timestamp":"2023-04-20T04:45:32.930CDT", "rule":{ "level":7, "description":"Integrity checksum changed.", "id":"550", "firedtimes":2, "mail":false, "groups":[ "ossec", "syscheck", "syscheck_entry_modified", "syscheck_file" ], "pci_dss":[ "11.5" ], "gpg13":[ "4.11" ], "gdpr":[ "II_5.1.f" ], "hipaa":[ "164.312.c.1", "164.312.c.2" ], "nist_800_53":[ "SI.7" ], "tsc":[ "PI1.4", "PI1.5", "CC6.1", "CC6.8", "CC7.2", "CC7.3" ] }, "agent":{ "id":"000", "name":"vrebollo" }, "manager":{ "name":"vrebollo" }, "id":"1681983932.35844", "full_log":"File '/tmp/syscheck-testing/file' modified\nMode: scheduled\nChanged attributes: permission\nPermissions changed from 'rw-r--r--' to 'rwxrwxrwx'\n", "syscheck":{ "path":"/tmp/syscheck-testing/file", "mode":"scheduled", "size_after":"16", "perm_before":"rw-r--r--", "perm_after":"rwxrwxrwx", "uid_after":"0", "gid_after":"0", "md5_after":"5d3d0a09d02278b66e1cb1246206ce6c", "sha1_after":"9431dff60933b9a80e5cfd0905a76e1704c743b1", "sha256_after":"3b9746fd2685c657f5526719e3682d5785050790ed0e42e40395d395c7eeef2d", "uname_after":"root", "gname_after":"system", "mtime_after":"2023-04-20T04:45:14", "inode_after":8219, "changed_attributes":[ "permission" ], "event":"modified" }, "decoder":{ "name":"syscheck_integrity_changed" }, "location":"syscheck" } ```
Basic Case - File Update (File user and group owners) :green_circle:
ossec.conf
```xml
<syscheck>
  <disabled>no</disabled>
  <frequency>10</frequency>
  <scan_on_start>yes</scan_on_start>
  <alert_new_files>yes</alert_new_files>
  <directories>/tmp/syscheck-testing</directories>
</syscheck>
```
File integrity alerts should be triggered in the case of file owner changes. In this case, we have changed the owner of the `/tmp/syscheck-testing/file` file to `guest:wazuh` ``` chown guest:wazuh /tmp/syscheck-testing/file_testing_owners.log ``` The expected alert was triggered ``` { "timestamp":"2023-04-20T05:02:16.362CDT", "rule":{ "level":7, "description":"Integrity checksum changed.", "id":"550", "firedtimes":3, "mail":false, "groups":[ "ossec", "syscheck", "syscheck_entry_modified", "syscheck_file" ], "pci_dss":[ "11.5" ], "gpg13":[ "4.11" ], "gdpr":[ "II_5.1.f" ], "hipaa":[ "164.312.c.1", "164.312.c.2" ], "nist_800_53":[ "SI.7" ], "tsc":[ "PI1.4", "PI1.5", "CC6.1", "CC6.8", "CC7.2", "CC7.3" ] }, "agent":{ "id":"000", "name":"vrebollo" }, "manager":{ "name":"vrebollo" }, "id":"1681984936.41966", "full_log":"File '/tmp/syscheck-testing/file_testing_owners.log' modified\nMode: scheduled\nChanged attributes: uid,user_name,gid,group_name\nOwnership was '0', now it is '100'\nUser name was 'root', now it is 'guest'\nGroup ownership was '0', now it is '209'\nGroup name was 'system', now it is 'wazuh'\n", "syscheck":{ "path":"/tmp/syscheck-testing/file_testing_owners.log", "mode":"scheduled", "size_after":"16", "perm_after":"rw-r--r--", "uid_before":"0", "uid_after":"100", "gid_before":"0", "gid_after":"209", "md5_after":"5d3d0a09d02278b66e1cb1246206ce6c", "sha1_after":"9431dff60933b9a80e5cfd0905a76e1704c743b1", "sha256_after":"3b9746fd2685c657f5526719e3682d5785050790ed0e42e40395d395c7eeef2d", "uname_before":"root", "uname_after":"guest", "gname_before":"system", "gname_after":"wazuh", "mtime_after":"2023-04-20T04:59:53", "inode_after":8221, "changed_attributes":[ "uid", "user_name", "gid", "group_name" ], "event":"modified" }, "decoder":{ "name":"syscheck_integrity_changed" }, "location":"syscheck" } ```
Basic Case - File deletion :red_circle:
ossec.conf
```xml
<syscheck>
  <disabled>no</disabled>
  <frequency>10</frequency>
  <scan_on_start>yes</scan_on_start>
  <alert_new_files>yes</alert_new_files>
  <directories>/tmp/syscheck-testing</directories>
</syscheck>
```
The path was correctly monitored: ``` 2023/04/20 03:45:17 wazuh-syscheckd: INFO: (6003): Monitoring path: '/tmp/syscheck-testing', with options 'size | permissions | owner | group | mtime | inode | hash_md5 | hash_sha1 | hash_sha256 | scheduled'. ``` The expected alert was triggered: ``` {"timestamp":"2023-04-20T03:51:20.503CDT","rule":{"level":7,"description":"File deleted.","id":"553","firedtimes":1,"mail":false,"groups":["ossec","syscheck","syscheck_entry_deleted","syscheck_file"],"pci_dss":["11.5"],"gpg13":["4.11"],"gdpr":["II_5.1.f"],"hipaa":["164.312.c.1","164.312.c.2"],"nist_800_53":["SI.7"],"tsc":["PI1.4","PI1.5","CC6.1","CC6.8","CC7.2","CC7.3"]},"agent":{"id":"000","name":"vrebollo"},"manager":{"name":"vrebollo"},"id":"1681980680.17464","full_log":"File '/tmp/syscheck-testing/file0' deleted\nMode: scheduled\n","syscheck":{"path":"/tmp/syscheck-testing/file0","mode":"scheduled","size_after":"16","perm_after":"rw-r--r--","uid_after":"0","gid_after":"0","md5_after":"5d3d0a09d02278b66e1cb1246206ce6c","sha1_after":"9431dff60933b9a80e5cfd0905a76e1704c743b1","sha256_after":"3b9746fd2685c657f5526719e3682d5785050790ed0e42e40395d395c7eeef2d","uname_after":"root","gname_after":"system","mtime_after":"2023-04-20T03:48:01","inode_after":8198,"event":"deleted"},"decoder":{"name":"syscheck_deleted"},"location":"syscheck"} ``` However, the following warning is generated: ``` 2023/04/20 03:51:20 wazuh-analysisd: WARNING: Mitre Technique ID 'T1070.004' not found in database. 2023/04/20 03:51:20 wazuh-analysisd: WARNING: Mitre Technique ID 'T1485' not found in database. ``` In the AIX manager, the MITRE database does not work properly. This causes the absence of MITRE information in all the alerts. 
In this case, the alert was expected to include the following MITRE information: ``` "mitre":{ "id":[ "T1070.004", "T1485" ], "tactic":[ "Defense Evasion", "Impact" ], "technique":[ "File Deletion", "Data Destruction" ] }, ``` > **Note** > This issue will not be taken into account in other checks to avoid duplication
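Whether an alert carries the MITRE enrichment can be checked directly on the decoded JSON. A minimal sketch with a hypothetical `has_mitre` helper; on this AIX manager the field is expected to be absent:

```python
import json

def has_mitre(alert_json):
    """True if the alert's rule block includes MITRE enrichment."""
    return "mitre" in json.loads(alert_json).get("rule", {})

# Alert as produced by the AIX manager: no "mitre" block under "rule".
aix_alert = '{"rule": {"id": "553", "description": "File deleted."}}'
# Alert as expected on a healthy manager.
healthy_alert = '{"rule": {"id": "553", "mitre": {"id": ["T1070.004", "T1485"]}}}'

print(has_mitre(aix_alert), has_mitre(healthy_alert))  # → False True
```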
Multiple Directory Monitoring - Explicit configuration :green_circle:
ossec.conf
```xml
<syscheck>
  <disabled>no</disabled>
  <frequency>10</frequency>
  <scan_on_start>yes</scan_on_start>
  <alert_new_files>yes</alert_new_files>
  <directories>/tmp/syscheck-testing1</directories>
  <directories>/tmp/syscheck-testing2</directories>
  <directories>/tmp/syscheck-testing3</directories>
  <directories>/tmp/syscheck-testing4</directories>
  <directories>/tmp/syscheck-testing5</directories>
  <directories>/tmp/syscheck-testing6</directories>
  <directories>/tmp/syscheck-testing7</directories>
  <directories>/tmp/syscheck-testing8</directories>
  <directories>/tmp/syscheck-testing9</directories>
</syscheck>
```
Specified directories were correctly monitored: ``` 2023/04/20 03:53:20 wazuh-syscheckd: INFO: (6003): Monitoring path: '/tmp/syscheck-testing1', with options 'size | permissions | owner | group | mtime | inode | hash_md5 | hash_sha1 | hash_sha256 | scheduled'. 2023/04/20 03:53:20 wazuh-syscheckd: INFO: (6003): Monitoring path: '/tmp/syscheck-testing2', with options 'size | permissions | owner | group | mtime | inode | hash_md5 | hash_sha1 | hash_sha256 | scheduled'. 2023/04/20 03:53:20 wazuh-syscheckd: INFO: (6003): Monitoring path: '/tmp/syscheck-testing3', with options 'size | permissions | owner | group | mtime | inode | hash_md5 | hash_sha1 | hash_sha256 | scheduled'. 2023/04/20 03:53:20 wazuh-syscheckd: INFO: (6003): Monitoring path: '/tmp/syscheck-testing4', with options 'size | permissions | owner | group | mtime | inode | hash_md5 | hash_sha1 | hash_sha256 | scheduled'. 2023/04/20 03:53:20 wazuh-syscheckd: INFO: (6003): Monitoring path: '/tmp/syscheck-testing5', with options 'size | permissions | owner | group | mtime | inode | hash_md5 | hash_sha1 | hash_sha256 | scheduled'. 2023/04/20 03:53:20 wazuh-syscheckd: INFO: (6003): Monitoring path: '/tmp/syscheck-testing6', with options 'size | permissions | owner | group | mtime | inode | hash_md5 | hash_sha1 | hash_sha256 | scheduled'. 2023/04/20 03:53:20 wazuh-syscheckd: INFO: (6003): Monitoring path: '/tmp/syscheck-testing7', with options 'size | permissions | owner | group | mtime | inode | hash_md5 | hash_sha1 | hash_sha256 | scheduled'. 2023/04/20 03:53:20 wazuh-syscheckd: INFO: (6003): Monitoring path: '/tmp/syscheck-testing8', with options 'size | permissions | owner | group | mtime | inode | hash_md5 | hash_sha1 | hash_sha256 | scheduled'. 2023/04/20 03:53:20 wazuh-syscheckd: INFO: (6003): Monitoring path: '/tmp/syscheck-testing9', with options 'size | permissions | owner | group | mtime | inode | hash_md5 | hash_sha1 | hash_sha256 | scheduled'. 
``` File creation alerts were correctly triggered: ``` {"timestamp":"2023-04-20T03:55:32.616CDT","rule":{"level":5,"description":"File added to the system.","id":"554","firedtimes":1,"mail":false,"groups":["ossec","syscheck","syscheck_entry_added","syscheck_file"],"pci_dss":["11.5"],"gpg13":["4.11"],"gdpr":["II_5.1.f"],"hipaa":["164.312.c.1","164.312.c.2"],"nist_800_53":["SI.7"],"tsc":["PI1.4","PI1.5","CC6.1","CC6.8","CC7.2","CC7.3"]},"agent":{"id":"000","name":"vrebollo"},"manager":{"name":"vrebollo"},"id":"1681980932.20739","full_log":"File '/tmp/syscheck-testing1/file0.log' added\nMode: scheduled\n","syscheck":{"path":"/tmp/syscheck-testing1/file0.log","mode":"scheduled","size_after":"8","perm_after":"rw-r--r--","uid_after":"0","gid_after":"0","md5_after":"ec4d59b2732f2f153240a8ff746282a6","sha1_after":"e9f9b899cc7161e1eb0b1e4c042fdfabccf7958c","sha256_after":"41920b348e0c6ff2ef9b7e3ee9308726aa5250fa717883e073ff6a936a9325a4","uname_after":"root","gname_after":"system","mtime_after":"2023-04-20T03:55:24","inode_after":8209,"event":"added"},"decoder":{"name":"syscheck_new_entry"},"location":"syscheck"} {"timestamp":"2023-04-20T03:55:43.620CDT","rule":{"level":5,"description":"File added to the system.","id":"554","firedtimes":2,"mail":false,"groups":["ossec","syscheck","syscheck_entry_added","syscheck_file"],"pci_dss":["11.5"],"gpg13":["4.11"],"gdpr":["II_5.1.f"],"hipaa":["164.312.c.1","164.312.c.2"],"nist_800_53":["SI.7"],"tsc":["PI1.4","PI1.5","CC6.1","CC6.8","CC7.2","CC7.3"]},"agent":{"id":"000","name":"vrebollo"},"manager":{"name":"vrebollo"},"id":"1681980943.21429","full_log":"File '/tmp/syscheck-testing2/file0.log' added\nMode: 
scheduled\n","syscheck":{"path":"/tmp/syscheck-testing2/file0.log","mode":"scheduled","size_after":"8","perm_after":"rw-r--r--","uid_after":"0","gid_after":"0","md5_after":"ec4d59b2732f2f153240a8ff746282a6","sha1_after":"e9f9b899cc7161e1eb0b1e4c042fdfabccf7958c","sha256_after":"41920b348e0c6ff2ef9b7e3ee9308726aa5250fa717883e073ff6a936a9325a4","uname_after":"root","gname_after":"system","mtime_after":"2023-04-20T03:55:41","inode_after":8210,"event":"added"},"decoder":{"name":"syscheck_new_entry"},"location":"syscheck"} {"timestamp":"2023-04-20T03:55:54.625CDT","rule":{"level":5,"description":"File added to the system.","id":"554","firedtimes":3,"mail":false,"groups":["ossec","syscheck","syscheck_entry_added","syscheck_file"],"pci_dss":["11.5"],"gpg13":["4.11"],"gdpr":["II_5.1.f"],"hipaa":["164.312.c.1","164.312.c.2"],"nist_800_53":["SI.7"],"tsc":["PI1.4","PI1.5","CC6.1","CC6.8","CC7.2","CC7.3"]},"agent":{"id":"000","name":"vrebollo"},"manager":{"name":"vrebollo"},"id":"1681980954.22119","full_log":"File '/tmp/syscheck-testing3/file0.log' added\nMode: scheduled\n","syscheck":{"path":"/tmp/syscheck-testing3/file0.log","mode":"scheduled","size_after":"8","perm_after":"rw-r--r--","uid_after":"0","gid_after":"0","md5_after":"ec4d59b2732f2f153240a8ff746282a6","sha1_after":"e9f9b899cc7161e1eb0b1e4c042fdfabccf7958c","sha256_after":"41920b348e0c6ff2ef9b7e3ee9308726aa5250fa717883e073ff6a936a9325a4","uname_after":"root","gname_after":"system","mtime_after":"2023-04-20T03:55:45","inode_after":8211,"event":"added"},"decoder":{"name":"syscheck_new_entry"},"location":"syscheck"} {"timestamp":"2023-04-20T03:55:54.625CDT","rule":{"level":5,"description":"File added to the 
system.","id":"554","firedtimes":4,"mail":false,"groups":["ossec","syscheck","syscheck_entry_added","syscheck_file"],"pci_dss":["11.5"],"gpg13":["4.11"],"gdpr":["II_5.1.f"],"hipaa":["164.312.c.1","164.312.c.2"],"nist_800_53":["SI.7"],"tsc":["PI1.4","PI1.5","CC6.1","CC6.8","CC7.2","CC7.3"]},"agent":{"id":"000","name":"vrebollo"},"manager":{"name":"vrebollo"},"id":"1681980954.22809","full_log":"File '/tmp/syscheck-testing4/file0.log' added\nMode: scheduled\n","syscheck":{"path":"/tmp/syscheck-testing4/file0.log","mode":"scheduled","size_after":"8","perm_after":"rw-r--r--","uid_after":"0","gid_after":"0","md5_after":"ec4d59b2732f2f153240a8ff746282a6","sha1_after":"e9f9b899cc7161e1eb0b1e4c042fdfabccf7958c","sha256_after":"41920b348e0c6ff2ef9b7e3ee9308726aa5250fa717883e073ff6a936a9325a4","uname_after":"root","gname_after":"system","mtime_after":"2023-04-20T03:55:48","inode_after":8212,"event":"added"},"decoder":{"name":"syscheck_new_entry"},"location":"syscheck"} ... ```
Multiple Directory Monitoring - Regex :green_circle:
ossec.conf
```xml
<syscheck>
  <disabled>no</disabled>
  <frequency>10</frequency>
  <scan_on_start>yes</scan_on_start>
  <alert_new_files>yes</alert_new_files>
  <directories>/tmp/syscheck-testing/regex-testing*</directories>
</syscheck>
```
In this case, syscheck is expected to dynamically monitor all the directories that match the `/tmp/syscheck-testing/regex-testing*` pattern (the directories are not created before starting the manager). After starting the manager, we create two directories: ``` mkdir /tmp/syscheck-testing/regex-testing1 mkdir /tmp/syscheck-testing/NoMonitor/ ``` Then we create one file in each directory: ``` bash-5.1# echo "Testing" >> /tmp/syscheck-testing/NoMonitor/file bash-5.1# echo "Testing" >> /tmp/syscheck-testing/regex-testing1/file ``` As expected, an alert is only triggered for the `regex-testing1` directory: ``` {"timestamp":"2023-04-20T04:05:16.234CDT","rule":{"level":5,"description":"File added to the system.","id":"554","firedtimes":1,"mail":false,"groups":["ossec","syscheck","syscheck_entry_added","syscheck_file"],"pci_dss":["11.5"],"gpg13":["4.11"],"gdpr":["II_5.1.f"],"hipaa":["164.312.c.1","164.312.c.2"],"nist_800_53":["SI.7"],"tsc":["PI1.4","PI1.5","CC6.1","CC6.8","CC7.2","CC7.3"]},"agent":{"id":"000","name":"vrebollo"},"manager":{"name":"vrebollo"},"id":"1681981516.26796","full_log":"File '/tmp/syscheck-testing/regex-testing1/file' added\nMode: scheduled\n","syscheck":{"path":"/tmp/syscheck-testing/regex-testing1/file","mode":"scheduled","size_after":"8","perm_after":"rw-r--r--","uid_after":"0","gid_after":"0","md5_after":"ec4d59b2732f2f153240a8ff746282a6","sha1_after":"e9f9b899cc7161e1eb0b1e4c042fdfabccf7958c","sha256_after":"41920b348e0c6ff2ef9b7e3ee9308726aa5250fa717883e073ff6a936a9325a4","uname_after":"root","gname_after":"system","mtime_after":"2023-04-20T04:05:12","inode_after":8218,"event":"added"},"decoder":{"name":"syscheck_new_entry"},"location":"syscheck"} ``` After restarting the manager, we can see that it is only monitoring the expected directory: ``` 2023/04/20 04:06:06 wazuh-syscheckd: INFO: (6003): Monitoring path: '/tmp/syscheck-testing/regex-testing1', with options 'size | permissions | owner | group | mtime | inode | hash_md5 | hash_sha1 | hash_sha256 | 
scheduled'. ```
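The directory selection observed here follows shell-style wildcard matching. A minimal sketch using Python's `fnmatch` to illustrate which of the two created directories the pattern selects:

```python
from fnmatch import fnmatch

pattern = "/tmp/syscheck-testing/regex-testing*"
candidates = ["/tmp/syscheck-testing/regex-testing1", "/tmp/syscheck-testing/NoMonitor"]

# Only paths matching the wildcard pattern are monitored.
monitored = [path for path in candidates if fnmatch(path, pattern)]
print(monitored)  # → ['/tmp/syscheck-testing/regex-testing1']
```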
Check Options :green_circle:
ossec.conf
```xml
<syscheck>
  <disabled>no</disabled>
  <frequency>10</frequency>
  <scan_on_start>yes</scan_on_start>
  <alert_new_files>yes</alert_new_files>
  <directories>/tmp/syscheck-testing</directories>
</syscheck>
```
The directory was monitored with the specified option: ``` 2023/04/20 05:04:17 wazuh-syscheckd: INFO: (6003): Monitoring path: '/tmp/syscheck-testing', with options 'size | permissions | owner | group | mtime | inode | hash_md5 | hash_sha1 | hash_sha256 | scheduled'. ``` Also, when specific checks are disabled, the change is correctly applied to the file monitoring. In this case, we have disabled the `check_perm` option: ``` 2023/04/20 05:24:48 wazuh-syscheckd: INFO: (6003): Monitoring path: '/tmp/syscheck-testing', with options 'size | owner | group | mtime | inode | hash_md5 | hash_sha1 | hash_sha256 | scheduled'. ```
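For reference, individual checks are toggled through the `check_*` attributes of the `<directories>` tag. A sketch of the configuration assumed for the second run above (the exact attribute combination is an assumption inferred from the reported monitoring options):

```xml
<syscheck>
  <!-- check_all enables every check; check_perm="no" then removes the permissions check -->
  <directories check_all="yes" check_perm="no">/tmp/syscheck-testing</directories>
</syscheck>
```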
Report Changes - Basic :green_circle:
ossec.conf
```xml
<syscheck>
  <disabled>no</disabled>
  <frequency>10</frequency>
  <scan_on_start>yes</scan_on_start>
  <alert_new_files>yes</alert_new_files>
  <directories report_changes="yes">/tmp/syscheck-testing</directories>
</syscheck>
```
Report changes work as expected. In this case we have added the `Testing` string to the `/tmp/syscheck-testing/file.log` file: `echo "Testing" >> /tmp/syscheck-testing/file.log` The expected alert was triggered: ``` { "timestamp":"2023-04-20T05:28:55.503CDT", "rule":{ "level":7, "description":"Integrity checksum changed.", "id":"550", "firedtimes":1, "mail":false, "groups":[ "ossec", "syscheck", "syscheck_entry_modified", "syscheck_file" ], "pci_dss":[ "11.5" ], "gpg13":[ "4.11" ], "gdpr":[ "II_5.1.f" ], "hipaa":[ "164.312.c.1", "164.312.c.2" ], "nist_800_53":[ "SI.7" ], "tsc":[ "PI1.4", "PI1.5", "CC6.1", "CC6.8", "CC7.2", "CC7.3" ] }, "agent":{ "id":"000", "name":"vrebollo" }, "manager":{ "name":"vrebollo" }, "id":"1681986535.55494", "full_log":"File '/tmp/syscheck-testing/file.log' modified\nMode: scheduled\nChanged attributes: size,mtime,md5,sha1,sha256\nSize changed from '8' to '16'\nOld modification time was: '1681986494', now it is '1681986531'\nOld md5sum was: 'ec4d59b2732f2f153240a8ff746282a6'\nNew md5sum is : '5d3d0a09d02278b66e1cb1246206ce6c'\nOld sha1sum was: 'e9f9b899cc7161e1eb0b1e4c042fdfabccf7958c'\nNew sha1sum is : '9431dff60933b9a80e5cfd0905a76e1704c743b1'\nOld sha256sum was: '41920b348e0c6ff2ef9b7e3ee9308726aa5250fa717883e073ff6a936a9325a4'\nNew sha256sum is : '3b9746fd2685c657f5526719e3682d5785050790ed0e42e40395d395c7eeef2d'\n", "syscheck":{ "path":"/tmp/syscheck-testing/file.log", "mode":"scheduled", "size_before":"8", "size_after":"16", "perm_after":"rw-r--r--", "uid_after":"0", "gid_after":"0", "md5_before":"ec4d59b2732f2f153240a8ff746282a6", "md5_after":"5d3d0a09d02278b66e1cb1246206ce6c", "sha1_before":"e9f9b899cc7161e1eb0b1e4c042fdfabccf7958c", "sha1_after":"9431dff60933b9a80e5cfd0905a76e1704c743b1", "sha256_before":"41920b348e0c6ff2ef9b7e3ee9308726aa5250fa717883e073ff6a936a9325a4", "sha256_after":"3b9746fd2685c657f5526719e3682d5785050790ed0e42e40395d395c7eeef2d", "uname_after":"root", "gname_after":"system", 
"mtime_before":"2023-04-20T05:28:14", "mtime_after":"2023-04-20T05:28:51", "inode_after":8222, "diff":"1a2\n> Testing\n", "changed_attributes":[ "size", "mtime", "md5", "sha1", "sha256" ], "event":"modified" }, "decoder":{ "name":"syscheck_integrity_changed" }, "location":"syscheck" } ``` Expected diff field was generated: `"diff":"1a2\n> Testing\n",`
Report Changes - Advanced :green_circle:
ossec.conf
```xml
<syscheck>
  <disabled>no</disabled>
  <frequency>10</frequency>
  <scan_on_start>yes</scan_on_start>
  <alert_new_files>yes</alert_new_files>
  <directories report_changes="yes">/tmp/syscheck-testing</directories>
  <diff>
    <disk_quota>
      <enabled>yes</enabled>
      <size>10KB</size>
    </disk_quota>
    <file_size>
      <enabled>yes</enabled>
      <limit>1KB</limit>
    </file_size>
    <nodiff>tmp/syscheck-testing/secret</nodiff>
  </diff>
</syscheck>
```
Using this configuration, alerts should contain the diff value if the filesize is less than 1KB. In this case, we have to edit the file `/tmp/syscheck-testing/file_new` ``` { "timestamp":"2023-04-20T07:09:24.949CDT", "rule":{ "level":7, "description":"Integrity checksum changed.", "id":"550", "firedtimes":1, "mail":false, "groups":[ "ossec", "syscheck", "syscheck_entry_modified", "syscheck_file" ], "pci_dss":[ "11.5" ], "gpg13":[ "4.11" ], "gdpr":[ "II_5.1.f" ], "hipaa":[ "164.312.c.1", "164.312.c.2" ], "nist_800_53":[ "SI.7" ], "tsc":[ "PI1.4", "PI1.5", "CC6.1", "CC6.8", "CC7.2", "CC7.3" ] }, "agent":{ "id":"000", "name":"vrebollo" }, "manager":{ "name":"vrebollo" }, "id":"1681992564.1590260", "full_log":"File '/tmp/syscheck-testing/file_new' modified\nMode: scheduled\nChanged attributes: size,mtime,md5,sha1,sha256\nSize changed from '136' to '144'\nOld modification time was: '1681991819', now it is '1681992552'\nOld md5sum was: '49464141d309e003dc0876650c33692b'\nNew md5sum is : '781d3c8ab2c1528fff72988a0c2b9b40'\nOld sha1sum was: '82467dc6544404d8ab745dd42d307b07626e08cf'\nNew sha1sum is : 'e3c2fd415ffba4f9326e528822783233af42e85e'\nOld sha256sum was: 'fead7d979ee7e9e776377fa3074c4d120153c676f80bfc4d09842959bd205558'\nNew sha256sum is : '77b66f681090b50af29210583f4c9b67e0074cec7b91f8d2f5162a823c04682d'\n", "syscheck":{ "path":"/tmp/syscheck-testing/file_new", "mode":"scheduled", "size_before":"136", "size_after":"144", "perm_after":"rw-r--r--", "uid_after":"0", "gid_after":"0", "md5_before":"49464141d309e003dc0876650c33692b", "md5_after":"781d3c8ab2c1528fff72988a0c2b9b40", "sha1_before":"82467dc6544404d8ab745dd42d307b07626e08cf", "sha1_after":"e3c2fd415ffba4f9326e528822783233af42e85e", "sha256_before":"fead7d979ee7e9e776377fa3074c4d120153c676f80bfc4d09842959bd205558", "sha256_after":"77b66f681090b50af29210583f4c9b67e0074cec7b91f8d2f5162a823c04682d", "uname_after":"root", "gname_after":"system", "mtime_before":"2023-04-20T06:56:59", 
"mtime_after":"2023-04-20T07:09:12", "inode_after":8280, "diff":"17a18\n> Testing\n", "changed_attributes":[ "size", "mtime", "md5", "sha1", "sha256" ], "event":"modified" }, "decoder":{ "name":"syscheck_integrity_changed" }, "location":"syscheck" } ``` We can see that the alert contains the diff value. In the case of increasing the file size, we see that this value is no longer in the alert: ``` { "timestamp":"2023-04-20T07:10:25.697CDT", "rule":{ "level":7, "description":"Integrity checksum changed.", "id":"550", "firedtimes":3, "mail":false, "groups":[ "ossec", "syscheck", "syscheck_entry_modified", "syscheck_file" ], "pci_dss":[ "11.5" ], "gpg13":[ "4.11" ], "gdpr":[ "II_5.1.f" ], "hipaa":[ "164.312.c.1", "164.312.c.2" ], "nist_800_53":[ "SI.7" ], "tsc":[ "PI1.4", "PI1.5", "CC6.1", "CC6.8", "CC7.2", "CC7.3" ] }, "agent":{ "id":"000", "name":"vrebollo" }, "manager":{ "name":"vrebollo" }, "id":"1681992625.1593966", "full_log":"File '/tmp/syscheck-testing/file_new' modified\nMode: scheduled\nChanged attributes: size,mtime,md5,sha1,sha256\nSize changed from '2806' to '3411'\nOld modification time was: '1681992613', now it is '1681992615'\nOld md5sum was: '98ed324ca0e83d104203622441a1d75c'\nNew md5sum is : '12218c31ab38db291050a76c6ae86b91'\nOld sha1sum was: '4638c264bd1941fe57789e2ca86769979ce40aae'\nNew sha1sum is : '69a7849229983734396bdd1adb1a8fedd09cee30'\nOld sha256sum was: 'b1d79bed5e3667218166708e95dd7bdc3fde17c3e5a1b43ce0666ccd7ef8d12f'\nNew sha256sum is : '55535868b8bc5fb732857e2957ee72a2ba0591c0e225c8cf2ae0d2f48d9b48b8'\n", "syscheck":{ "path":"/tmp/syscheck-testing/file_new", "mode":"scheduled", "size_before":"2806", "size_after":"3411", "perm_after":"rw-r--r--", "uid_after":"0", "gid_after":"0", "md5_before":"98ed324ca0e83d104203622441a1d75c", "md5_after":"12218c31ab38db291050a76c6ae86b91", "sha1_before":"4638c264bd1941fe57789e2ca86769979ce40aae", "sha1_after":"69a7849229983734396bdd1adb1a8fedd09cee30", 
"sha256_before":"b1d79bed5e3667218166708e95dd7bdc3fde17c3e5a1b43ce0666ccd7ef8d12f", "sha256_after":"55535868b8bc5fb732857e2957ee72a2ba0591c0e225c8cf2ae0d2f48d9b48b8", "uname_after":"root", "gname_after":"system", "mtime_before":"2023-04-20T07:10:13", "mtime_after":"2023-04-20T07:10:15", "inode_after":8280, "changed_attributes":[ "size", "mtime", "md5", "sha1", "sha256" ], "event":"modified" }, "decoder":{ "name":"syscheck_integrity_changed" }, "location":"syscheck" } ``` In addition, alerts for the files whose size is less than 1KB will include the diff value until the quota is achieved: ``` {"timestamp":"2023-04-20T07:17:38.774CDT","rule":{"level":7,"description":"Integrity checksum changed.","id":"550","firedtimes":1,"mail":false,"groups":["ossec","syscheck","syscheck_entry_modified","syscheck_file"],"pci_dss":["11.5"],"gpg13":["4.11"],"gdpr":["II_5.1.f"],"hipaa":["164.312.c.1","164.312.c.2"],"nist_800_53":["SI.7"],"tsc":["PI1.4","PI1.5","CC6.1","CC6.8","CC7.2","CC7.3"]},"agent":{"id":"000","name":"vrebollo"},"manager":{"name":"vrebollo"},"id":"1681993058.1600695","full_log":"File '/tmp/syscheck-testing/file_new_2' modified\nMode: scheduled\nChanged attributes: size,mtime,md5,sha1,sha256\nSize changed from '121' to '363'\nOld modification time was: '1681992994', now it is '1681993053'\nOld md5sum was: 'c6623c5aa86bee58c31d7b71584a7d65'\nNew md5sum is : 'd9f4a0201f328b015658732a779bc136'\nOld sha1sum was: '9dc07198c4ba4bbb4e67a4d9b1354ff55e1ce9fd'\nNew sha1sum is : '369261f690d75b7da12d1f43ebd9bb59c5976ae7'\nOld sha256sum was: '06932c12877a7084197ec68b5e02016a66315354cf845afd7b41f370ec7ac6bf'\nNew sha256sum is : 
'9b9caf411e233a2e527dbd96e2696f85b88ee4f9f52ee9c4e7df511c72b948a0'\n","syscheck":{"path":"/tmp/syscheck-testing/file_new_2","mode":"scheduled","size_before":"121","size_after":"363","perm_after":"rw-r--r--","uid_after":"0","gid_after":"0","md5_before":"c6623c5aa86bee58c31d7b71584a7d65","md5_after":"d9f4a0201f328b015658732a779bc136","sha1_before":"9dc07198c4ba4bbb4e67a4d9b1354ff55e1ce9fd","sha1_after":"369261f690d75b7da12d1f43ebd9bb59c5976ae7","sha256_before":"06932c12877a7084197ec68b5e02016a66315354cf845afd7b41f370ec7ac6bf","sha256_after":"9b9caf411e233a2e527dbd96e2696f85b88ee4f9f52ee9c4e7df511c72b948a0","uname_after":"root","gname_after":"system","mtime_before":"2023-04-20T07:16:34","mtime_after":"2023-04-20T07:17:33","inode_after":8281,"diff":"1a2,3\n> AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA\n> AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA\n","changed_attributes":["size","mtime","md5","sha1","sha256"],"event":"modified"},"decoder":{"name":"syscheck_integrity_changed"},"location":"syscheck"} ... 
-- After increasing the disk quota to 10KB {"timestamp":"2023-04-20T07:26:54.446CDT","rule":{"level":7,"description":"Integrity checksum changed.","id":"550","firedtimes":73,"mail":false,"groups":["ossec","syscheck","syscheck_entry_modified","syscheck_file"],"pci_dss":["11.5"],"gpg13":["4.11"],"gdpr":["II_5.1.f"],"hipaa":["164.312.c.1","164.312.c.2"],"nist_800_53":["SI.7"],"tsc":["PI1.4","PI1.5","CC6.1","CC6.8","CC7.2","CC7.3"]},"agent":{"id":"000","name":"vrebollo"},"manager":{"name":"vrebollo"},"id":"1681993614.1756117","full_log":"File '/tmp/syscheck-testing/file3' modified\nMode: scheduled\nChanged attributes: size,mtime,md5,sha1,sha256\nSize changed from '2222856' to '2262048'\nOld modification time was: '1681993586', now it is '1681993614'\nOld md5sum was: '51f509d5c49a1227a30fa7162d0dfd47'\nNew md5sum is : '5b8883a3c763cddc0c2087d83d6b5676'\nOld sha1sum was: '12bf03d21072c3e001c699ea4c552d829f0e6ea8'\nNew sha1sum is : '3edfe7d9f893e399974817757f7e7811d154cfa5'\nOld sha256sum was: 'b48e3fbcadc9a42b1c6c59fde7c1a7eaa8c5c33c262bc1ed8e6eee8b80322c0f'\nNew sha256sum is : 'b46df3cdfb7c4856605f3093bef63302f2460b648f39f82c15870248f0d30daa'\n","syscheck":{"path":"/tmp/syscheck-testing/file3","mode":"scheduled","size_before":"2222856","size_after":"2262048","perm_after":"rw-r--r--","uid_after":"0","gid_after":"0","md5_before":"51f509d5c49a1227a30fa7162d0dfd47","md5_after":"5b8883a3c763cddc0c2087d83d6b5676","sha1_before":"12bf03d21072c3e001c699ea4c552d829f0e6ea8","sha1_after":"3edfe7d9f893e399974817757f7e7811d154cfa5","sha256_before":"b48e3fbcadc9a42b1c6c59fde7c1a7eaa8c5c33c262bc1ed8e6eee8b80322c0f","sha256_after":"b46df3cdfb7c4856605f3093bef63302f2460b648f39f82c15870248f0d30daa","uname_after":"root","gname_after":"system","mtime_before":"2023-04-20T07:26:26","mtime_after":"2023-04-20T07:26:54","inode_after":8229,"changed_attributes":["size","mtime","md5","sha1","sha256"],"event":"modified"},"decoder":{"name":"syscheck_integrity_changed"},"location":"syscheck"} ```
Nodiff :green_circle:
ossec.conf ``` no 10 yes yes /tmp/syscheck-testing /tmp/syscheck-testing/secret ```
In this case, we are going to avoid reporting changes in the `/tmp/syscheck-testing/secret`. The triggered alert does not contain the changes in the file: ``` { "timestamp":"2023-04-20T05:34:10.046CDT", "rule":{ "level":7, "description":"Integrity checksum changed.", "id":"550", "firedtimes":1, "mail":false, "groups":[ "ossec", "syscheck", "syscheck_entry_modified", "syscheck_file" ], "pci_dss":[ "11.5" ], "gpg13":[ "4.11" ], "gdpr":[ "II_5.1.f" ], "hipaa":[ "164.312.c.1", "164.312.c.2" ], "nist_800_53":[ "SI.7" ], "tsc":[ "PI1.4", "PI1.5", "CC6.1", "CC6.8", "CC7.2", "CC7.3" ] }, "agent":{ "id":"000", "name":"vrebollo" }, "manager":{ "name":"vrebollo" }, "id":"1681986850.64663", "full_log":"File '/tmp/syscheck-testing/secret' modified\nMode: scheduled\nChanged attributes: size,mtime,md5,sha1,sha256\nSize changed from '8' to '16'\nOld modification time was: '1681986828', now it is '1681986848'\nOld md5sum was: 'ec4d59b2732f2f153240a8ff746282a6'\nNew md5sum is : '5d3d0a09d02278b66e1cb1246206ce6c'\nOld sha1sum was: 'e9f9b899cc7161e1eb0b1e4c042fdfabccf7958c'\nNew sha1sum is : '9431dff60933b9a80e5cfd0905a76e1704c743b1'\nOld sha256sum was: '41920b348e0c6ff2ef9b7e3ee9308726aa5250fa717883e073ff6a936a9325a4'\nNew sha256sum is : '3b9746fd2685c657f5526719e3682d5785050790ed0e42e40395d395c7eeef2d'\n", "syscheck":{ "path":"/tmp/syscheck-testing/secret", "mode":"scheduled", "size_before":"8", "size_after":"16", "perm_after":"rw-r--r--", "uid_after":"0", "gid_after":"0", "md5_before":"ec4d59b2732f2f153240a8ff746282a6", "md5_after":"5d3d0a09d02278b66e1cb1246206ce6c", "sha1_before":"e9f9b899cc7161e1eb0b1e4c042fdfabccf7958c", "sha1_after":"9431dff60933b9a80e5cfd0905a76e1704c743b1", "sha256_before":"41920b348e0c6ff2ef9b7e3ee9308726aa5250fa717883e073ff6a936a9325a4", "sha256_after":"3b9746fd2685c657f5526719e3682d5785050790ed0e42e40395d395c7eeef2d", "uname_after":"root", "gname_after":"system", "mtime_before":"2023-04-20T05:33:48", "mtime_after":"2023-04-20T05:34:08", "inode_after":8223, 
"diff":"", "changed_attributes":[ "size", "mtime", "md5", "sha1", "sha256" ], "event":"modified" }, "decoder":{ "name":"syscheck_integrity_changed" }, "location":"syscheck" } ``` In the rest of the files, the diff is working as expected.
Scan Time :yellow_circle:
ossec.conf ``` no 1000 no yes /tmp/syscheck-testing 5:59 ```
Scan frequency is set to: ``` 2023/04/20 05:52:38 wazuh-syscheckd: INFO: (6010): File integrity monitoring scan frequency: 604800 seconds ``` The frequency defaults to one week, even though no `scan_day` is configured. Is there any option to schedule a scan at a certain hour every day? At the specified time, the syscheck scan started: ``` 2023/04/20 05:59:00 wazuh-syscheckd: INFO: (6008): File integrity monitoring scan started. 2023/04/20 05:59:00 wazuh-syscheckd: INFO: (6009): File integrity monitoring scan ended. ```
Scan Day :green_circle:
ossec.conf ``` no 1000 no yes /tmp/syscheck-testing thursday ``` The syscheck scan frequency was configured as expected: ``` 2023/04/20 06:00:36 wazuh-syscheckd: INFO: (6010): File integrity monitoring scan frequency: 604800 seconds 2023/04/20 06:00:36 wazuh-syscheckd: INFO: (6008): File integrity monitoring scan started. ```
Scan on Start :red_circle:
ossec.conf ``` no 1000 no yes /tmp/syscheck-testing ```
The scan is always triggered at the start time: ``` 2023/04/20 05:51:10 wazuh-db: INFO: Started (pid: 4325430). 2023/04/20 05:51:10 wazuh-execd: INFO: Started (pid: 6029412). 2023/04/20 05:51:10 wazuh-maild: INFO: E-Mail notification disabled. Clean Exit. 2023/04/20 05:51:10 wazuh-syscheckd: INFO: Started (pid: 12582930). 2023/04/20 05:51:10 wazuh-syscheckd: INFO: (6003): Monitoring path: '/tmp/syscheck-testing', with options 'size | permissions | owner | group | mtime | inode | hash_md5 | hash_sha1 | hash_sha256 | scheduled'. 2023/04/20 05:51:11 wazuh-logcollector: INFO: (1950): Analyzing file: '/var/ossec/logs/active-responses.log'. 2023/04/20 05:51:11 wazuh-logcollector: INFO: Monitoring output of command(360): df -P 2023/04/20 05:51:11 wazuh-logcollector: INFO: Monitoring full output of command(360): netstat -tulpn | sed 's/\([[:alnum:]]\+\)\ \+[[:digit:]]\+\ \+[[:digit:]]\+\ \+\(.*\):\([[:digit:]]*\)\ \+\([0-9\.\:\*]\+\).\+\ \([[:digit:]]*\/[[:alnum:]\-]*\).*/\1 \2 == \3 == \4 \5/' | sort -k 4 -g | sed 's/ == \(.*\) ==/:\1/' | sed 1,2d 2023/04/20 05:51:11 wazuh-logcollector: INFO: Monitoring full output of command(360): last -n 20 2023/04/20 05:51:11 wazuh-logcollector: INFO: Started (pid: 10551310). 2023/04/20 05:51:11 rootcheck: INFO: Starting rootcheck scan. 2023/04/20 05:51:11 wazuh-syscheckd: INFO: (6000): Starting daemon... 2023/04/20 05:51:11 wazuh-syscheckd: INFO: (6010): File integrity monitoring scan frequency: 1000 seconds 2023/04/20 05:51:11 wazuh-syscheckd: INFO: (6008): File integrity monitoring scan started. 2023/04/20 05:51:11 wazuh-syscheckd: INFO: (6009): File integrity monitoring scan ended. 2023/04/20 05:51:11 wazuh-monitord: INFO: Started (pid: 13041704). 2023/04/20 05:51:11 wazuh-modulesd: INFO: Started (pid: 9568330). 2023/04/20 05:51:11 wazuh-modulesd:agent-upgrade: INFO: (8153): Module Agent Upgrade started. 2023/04/20 05:51:11 wazuh-modulesd:ciscat: INFO: Module disabled. Exiting... 
2023/04/20 05:51:11 wazuh-modulesd:osquery: INFO: Module disabled. Exiting... 2023/04/20 05:51:11 wazuh-modulesd:database: INFO: Module started. 2023/04/20 05:51:11 wazuh-modulesd:download: INFO: Module started. 2023/04/20 05:51:11 wazuh-modulesd:task-manager: INFO: (8200): Module Task Manager started. 2023/04/20 05:51:11 wazuh-analysisd: INFO: Total rules enabled: '6327' 2023/04/20 05:51:11 wazuh-analysisd: INFO: Started (pid: 12058856). 2023/04/20 05:51:11 wazuh-db: ERROR: Can't open SQLite database 'var/db/mitre.db': unable to open database file 2023/04/20 05:51:11 wazuh-analysisd: ERROR: Bad response from wazuh-db: Couldn't open DB mitre 2023/04/20 05:51:11 wazuh-analysisd: ERROR: Response from the Mitre database cannot be parsed. 2023/04/20 05:51:11 wazuh-analysisd: ERROR: Mitre matrix information could not be loaded. 2023/04/20 05:51:11 wazuh-analysisd: INFO: (7200): Logtest started 2023/04/20 05:51:23 rootcheck: INFO: Ending rootcheck scan. ```
Frequency :green_circle: Syscheck scan frequency is correctly configured for different values
10 :green_circle: ``` 2023/04/20 06:05:32 wazuh-syscheckd: INFO: (6010): File integrity monitoring scan frequency: 10 seconds --- 2023/04/20 06:05:43 wazuh-syscheckd: INFO: (6008): File integrity monitoring scan started. 2023/04/20 06:05:43 wazuh-syscheckd: INFO: (6009): File integrity monitoring scan ended. ```
1000 ``` 2023/04/20 06:06:17 wazuh-syscheckd: INFO: (6010): File integrity monitoring scan frequency: 1000 seconds ```
432000 ``` 2023/04/20 06:06:42 wazuh-syscheckd: INFO: (6010): File integrity monitoring scan frequency: 432000 seconds ```
Ignore :green_circle:
ossec.conf ``` no 10 yes yes /tmp/syscheck-testing .log$|.swp$ /tmp/syscheck-testing/ignore_file ```
Three files were updated: ``` bash-5.1# echo "Testing" >> /tmp/syscheck-testing/file0 bash-5.1# echo "Testing" >> /tmp/syscheck-testing/file0.log bash-5.1# echo "Testing" >> /tmp/syscheck-testing/ignore_file ``` As expected, only the `file0` triggered an alert: ``` {"timestamp":"2023-04-20T06:09:33.201CDT","rule":{"level":7,"description":"Integrity checksum changed.","id":"550","firedtimes":1,"mail":false,"groups":["ossec","syscheck","syscheck_entry_modified","syscheck_file"],"pci_dss":["11.5"],"gpg13":["4.11"],"gdpr":["II_5.1.f"],"hipaa":["164.312.c.1","164.312.c.2"],"nist_800_53":["SI.7"],"tsc":["PI1.4","PI1.5","CC6.1","CC6.8","CC7.2","CC7.3"]},"agent":{"id":"000","name":"vrebollo"},"manager":{"name":"vrebollo"},"id":"1681988973.131255","full_log":"File '/tmp/syscheck-testing/file0' modified\nMode: scheduled\nChanged attributes: size,mtime,md5,sha1,sha256\nSize changed from '8' to '16'\nOld modification time was: '1681988958', now it is '1681988970'\nOld md5sum was: 'ec4d59b2732f2f153240a8ff746282a6'\nNew md5sum is : '5d3d0a09d02278b66e1cb1246206ce6c'\nOld sha1sum was: 'e9f9b899cc7161e1eb0b1e4c042fdfabccf7958c'\nNew sha1sum is : '9431dff60933b9a80e5cfd0905a76e1704c743b1'\nOld sha256sum was: '41920b348e0c6ff2ef9b7e3ee9308726aa5250fa717883e073ff6a936a9325a4'\nNew sha256sum is : 
'3b9746fd2685c657f5526719e3682d5785050790ed0e42e40395d395c7eeef2d'\n","syscheck":{"path":"/tmp/syscheck-testing/file0","mode":"scheduled","size_before":"8","size_after":"16","perm_after":"rw-r--r--","uid_after":"0","gid_after":"0","md5_before":"ec4d59b2732f2f153240a8ff746282a6","md5_after":"5d3d0a09d02278b66e1cb1246206ce6c","sha1_before":"e9f9b899cc7161e1eb0b1e4c042fdfabccf7958c","sha1_after":"9431dff60933b9a80e5cfd0905a76e1704c743b1","sha256_before":"41920b348e0c6ff2ef9b7e3ee9308726aa5250fa717883e073ff6a936a9325a4","sha256_after":"3b9746fd2685c657f5526719e3682d5785050790ed0e42e40395d395c7eeef2d","uname_after":"root","gname_after":"system","mtime_before":"2023-04-20T06:09:18","mtime_after":"2023-04-20T06:09:30","inode_after":8224,"changed_attributes":["size","mtime","md5","sha1","sha256"],"event":"modified"},"decoder":{"name":"syscheck_integrity_changed"},"location":"syscheck"} ```
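As a side note, the ignore pattern above can be sanity-checked outside of Wazuh by approximating the sregex with `grep -E` (a sketch only; syscheck uses its own matcher, not grep):

```shell
# Approximate the syscheck sregex '.log$|.swp$' with grep -E to see
# which of the three updated paths the pattern would ignore.
printf '%s\n' \
  /tmp/syscheck-testing/file0 \
  /tmp/syscheck-testing/file0.log \
  /tmp/syscheck-testing/ignore_file \
  | grep -E '.log$|.swp$'
# Only file0.log is printed: ignore_file is excluded by the literal
# ignore entry instead, and file0 matches neither, so it alerts.
```

This matches the observed behavior: only `file0` triggers an alert.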
Whodata :black_circle:
ossec.conf ``` no 10 yes no /tmp/syscheck-testing ```
Not supported by AIX manager. The expected warning is created: ``` 2023/04/20 06:58:54 wazuh-syscheckd: WARNING: (6333): Whodata monitoring request on unsupported system. ```
Realtime :black_circle:
ossec.conf ``` no 10 yes no /tmp/syscheck-testing ```
Not supported by AIX manager. The expected warning is created ``` 2023/04/20 06:56:12 wazuh-syscheckd: WARNING: (6908): Ignoring flag for real time monitoring on directory: '/tmp/syscheck-testing'. ```
Max EPS :green_circle:
ossec.conf ``` no 10 yes yes /tmp/syscheck-testing 5 ```
To easily check the max EPS limit of the syscheck scan, we follow these steps: - Edit 50 files continuously using the following command: ``` while true;do for i in {1..50}; do echo "Testing" >> /tmp/syscheck-testing/file${i}; done; done ``` - Monitor the number of lines of the alerts.json file using this command: ``` while true; do wc -l /var/ossec/logs/alerts/alerts.json; sleep 1; done; ``` - Check that, at scan time, no more than 5 alerts are generated per second. ``` 528 /var/ossec/logs/alerts/alerts.json 532 /var/ossec/logs/alerts/alerts.json 537 /var/ossec/logs/alerts/alerts.json 538 /var/ossec/logs/alerts/alerts.json 543 /var/ossec/logs/alerts/alerts.json 548 /var/ossec/logs/alerts/alerts.json 553 /var/ossec/logs/alerts/alerts.json 558 /var/ossec/logs/alerts/alerts.json 563 /var/ossec/logs/alerts/alerts.json 568 /var/ossec/logs/alerts/alerts.json 573 /var/ossec/logs/alerts/alerts.json 578 /var/ossec/logs/alerts/alerts.json ``` > Note > A more precise analysis is available in the performance testing issue: https://github.com/wazuh/wazuh-qa/issues/4052
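The sampled counters can also be checked mechanically. A minimal sketch, using the line counts listed above (one sample per second), that derives the maximum per-second alert growth:

```shell
# Derive the maximum per-second alert growth from successive wc -l
# samples of alerts.json (the counts sampled above).
counts="528 532 537 538 543 548 553 558 563 568 573 578"
prev="" max=0
for c in $counts; do
  if [ -n "$prev" ]; then
    d=$((c - prev))
    [ "$d" -gt "$max" ] && max=$d
  fi
  prev=$c
done
echo "max alerts/second: $max"
```

For the samples above this prints `max alerts/second: 5`, i.e. the growth never exceeds the configured limit.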
Max Files per second :green_circle:
ossec.conf ``` no 10 yes yes /tmp/syscheck-testing 5 ```
To check the max files per second limit of the syscheck scan, we follow these steps: - Edit 50 files continuously using the following command: ``` while true;do for i in {1..50}; do echo "Testing" >> /tmp/syscheck-testing/file${i}; done; done ``` - Monitor the number of lines of the alerts.json file using this command: ``` while true; do wc -l /var/ossec/logs/alerts/alerts.json; sleep 1; done; ``` - Check that, at scan time, no more than 5 alerts are generated per second. ``` 1136 /var/ossec/logs/alerts/alerts.json 1136 /var/ossec/logs/alerts/alerts.json 1136 /var/ossec/logs/alerts/alerts.json 1140 /var/ossec/logs/alerts/alerts.json 1149 /var/ossec/logs/alerts/alerts.json 1155 /var/ossec/logs/alerts/alerts.json 1161 /var/ossec/logs/alerts/alerts.json 1167 /var/ossec/logs/alerts/alerts.json 1173 /var/ossec/logs/alerts/alerts.json 1179 /var/ossec/logs/alerts/alerts.json 1183 /var/ossec/logs/alerts/alerts.json 1186 /var/ossec/logs/alerts/alerts.json 1186 /var/ossec/logs/alerts/alerts.json 1186 /var/ossec/logs/alerts/alerts.json ``` > Note > A more precise analysis is available in the performance testing issue: https://github.com/wazuh/wazuh-qa/issues/4052
File Limit :green_circle:
ossec.conf ``` no 10 yes yes /tmp/syscheck-testing yes 20 ```
After reaching the file limit the following warning is generated: ``` 2023/04/20 06:24:25 wazuh-syscheckd: WARNING: (6927): Sending DB 100% full alert. ``` In this case, file11 is correctly monitored, although, other files were ignored: ``` {"timestamp":"2023-04-20T06:27:15.617CDT","rule":{"level":7,"description":"Integrity checksum changed.","id":"550","firedtimes":3,"mail":false,"groups":["ossec","syscheck","syscheck_entry_modified","syscheck_file"],"pci_dss":["11.5"],"gpg13":["4.11"],"gdpr":["II_5.1.f"],"hipaa":["164.312.c.1","164.312.c.2"],"nist_800_53":["SI.7"],"tsc":["PI1.4","PI1.5","CC6.1","CC6.8","CC7.2","CC7.3"]},"agent":{"id":"000","name":"vrebollo"},"manager":{"name":"vrebollo"},"id":"1681990035.1548705","full_log":"File '/tmp/syscheck-testing/file11' modified\nMode: scheduled\nChanged attributes: size,mtime,md5,sha1,sha256\nSize changed from '2160248' to '2160256'\nOld modification time was: '1681990022', now it is '1681990034'\nOld md5sum was: 'f4f3dff3f688eb4ac33c47159690076f'\nNew md5sum is : '25bf270cb68a0261fe957a0f38ef9bfe'\nOld sha1sum was: 'b2c98330a4dc28dae12ca1d5c79d813bb4b3e9b7'\nNew sha1sum is : '055d9ad9c67f81d6cc955095387b154dde5a67c4'\nOld sha256sum was: 'ad1fdb1cda83b6b97fb853019c07ac761e25e848446ef51dd3d0516547b5f2d5'\nNew sha256sum is : 
'02370b468c00bb09d025f056cdcc666fbdf57360a8b00d70f45bae32f976fcdb'\n","syscheck":{"path":"/tmp/syscheck-testing/file11","mode":"scheduled","size_before":"2160248","size_after":"2160256","perm_after":"rw-r--r--","uid_after":"0","gid_after":"0","md5_before":"f4f3dff3f688eb4ac33c47159690076f","md5_after":"25bf270cb68a0261fe957a0f38ef9bfe","sha1_before":"b2c98330a4dc28dae12ca1d5c79d813bb4b3e9b7","sha1_after":"055d9ad9c67f81d6cc955095387b154dde5a67c4","sha256_before":"ad1fdb1cda83b6b97fb853019c07ac761e25e848446ef51dd3d0516547b5f2d5","sha256_after":"02370b468c00bb09d025f056cdcc666fbdf57360a8b00d70f45bae32f976fcdb","uname_after":"root","gname_after":"system","mtime_before":"2023-04-20T06:27:02","mtime_after":"2023-04-20T06:27:14","inode_after":8237,"changed_attributes":["size","mtime","md5","sha1","sha256"],"event":"modified"},"decoder":{"name":"syscheck_integrity_changed"},"location":"syscheck"} ```
Auto ignore :yellow_circle:
ossec.conf ``` no 10 yes yes /tmp/syscheck-testing yes ```
This option is not documented on the syscheck documentation page: https://documentation.wazuh.com/4.3/user-manual/reference/ossec-conf/syscheck.html#ignore No testing was performed because this option only applies to real-time monitoring, which is not supported by the AIX manager.
Alert New files :red_circle:
ossec.conf ``` no 10 yes no /tmp/syscheck-testing ```
A new file, `/tmp/syscheck-testing/file_new`, is created in the system: `echo "Testing" >> /tmp/syscheck-testing/file_new` Even with new-file alerts disabled, the `File added to the system` alert was triggered: ``` {"timestamp":"2023-04-20T06:41:02.521CDT","rule":{"level":5,"description":"File added to the system.","id":"554","firedtimes":1,"mail":false,"groups":["ossec","syscheck","syscheck_entry_added","syscheck_file"],"pci_dss":["11.5"],"gpg13":["4.11"],"gdpr":["II_5.1.f"],"hipaa":["164.312.c.1","164.312.c.2"],"nist_800_53":["SI.7"],"tsc":["PI1.4","PI1.5","CC6.1","CC6.8","CC7.2","CC7.3"]},"agent":{"id":"000","name":"vrebollo"},"manager":{"name":"vrebollo"},"id":"1681990862.1576315","full_log":"File '/tmp/syscheck-testing/file_new' added\nMode: scheduled\n","syscheck":{"path":"/tmp/syscheck-testing/file_new","mode":"scheduled","size_after":"8","perm_after":"rw-r--r--","uid_after":"0","gid_after":"0","md5_after":"ec4d59b2732f2f153240a8ff746282a6","sha1_after":"e9f9b899cc7161e1eb0b1e4c042fdfabccf7958c","sha256_after":"41920b348e0c6ff2ef9b7e3ee9308726aa5250fa717883e073ff6a936a9325a4","uname_after":"root","gname_after":"system","mtime_after":"2023-04-20T06:40:51","inode_after":8280,"event":"added"},"decoder":{"name":"syscheck_new_entry"},"location":"syscheck"} ```
File Update during downtime :red_circle:
ossec.conf ``` no 20 yes yes /tmp/syscheck-testing ```
When a scheduled scan is used, if a monitored file is changed before the next scan and the manager is restarted or stopped, the expected `Integrity checksum changed` alert is not generated in the next syscheck scan. This behavior can be seen in the linked video: [syscheck_bug.webm](https://user-images.githubusercontent.com/11089305/233406115-6deccf87-56c0-4a0a-ade7-016d38af2021.webm) The process to replicate the issue is the following: - Create a file in a monitored directory - Append a first message to the file: `echo "First message" >> /tmp/syscheck-testing/file1` - Check that the expected alert was triggered - Append a second message to the file: `echo "Second message" >> /tmp/syscheck-testing/file1` - Before the scan starts again, restart the manager - Check that the file change is not detected in either the first or the second scan - Append a third message to the file: `echo "Last message" >> /tmp/syscheck-testing/file1` - Check that the alert is correctly generated

Logcollector

Prerequisites **Decoders** ``` ^example example User '(\w+)' logged from '(\d+.\d+.\d+.\d+)' user, srcip ``` **Rules** ``` example User logged json example JSON TESTING json label-value JSON TESTING ``` **Testing event** ``` 2023/04/20 08:36:18 wazuh-testrule: INFO: Started (pid: 12845152). Since Wazuh v4.1.0 this binary is deprecated. Use wazuh-logtest instead wazuh-testrule: Type one log per line. Dec 25 20:45:02 MyHost example[12345]: User 'admin' logged from '192.168.1.100' **Phase 1: Completed pre-decoding. full event: 'Dec 25 20:45:02 MyHost example[12345]: User 'admin' logged from '192.168.1.100'' timestamp: 'Dec 25 20:45:02' hostname: 'MyHost' program_name: 'example' log: 'User 'admin' logged from '192.168.1.100'' **Phase 2: Completed decoding. decoder: 'example' dstuser: 'admin' srcip: '192.168.1.100' **Phase 3: Completed filtering (rules). Rule id: '100010' Level: '10' Description: 'User logged' **Alert to be generated. ```
Basic syslog file monitoring :green_circle:
ossec.conf ``` syslog /tmp/example.log ```
Using the testing event, the expected alert is correctly generated: ``` ** Alert 1681997917.1941333: - local,syslog,sshd, 2023 Apr 20 08:38:37 MyHost->/tmp/example.log Rule: 100010 (level 10) -> 'User logged' Src IP: 192.168.1.100 User: admin Dec 25 20:45:02 MyHost example[12345]: User 'admin' logged from '192.168.1.100' ```
Syslog file date wildcard :green_circle:
ossec.conf ``` syslog /tmp/example.log-%Y-%m-%d ```
After creating the file `/tmp/example.log-2023-04-20`, and restarting the manager, the expected alert was generated: ``` ** Alert 1681998068.1944332: - local,syslog,sshd, 2023 Apr 20 08:41:08 MyHost->/tmp/example.log-2023-04-20 Rule: 100010 (level 10) -> 'User logged' Src IP: 192.168.1.100 User: admin Dec 25 20:45:02 MyHost example[12345]: User 'admin' logged from '192.168.1.100 ```
Syslog File * wildcard for files :green_circle:
ossec.conf ``` syslog /tmp/*.log ```
If we create a file that matches the specified pattern, for example `/tmp/example2.log`, and wait `logcollector.vcheck_files` seconds (64 by default), the file will be monitored. If we then use the testing log, the expected alert is generated: ``` ** Alert 1681998340.1947838: - local,syslog,sshd, 2023 Apr 20 08:45:40 MyHost->/tmp/example2.log Rule: 100010 (level 10) -> 'User logged' Src IP: 192.168.1.100 User: admin Dec 25 20:45:02 MyHost example[12345]: User 'admin' logged from '192.168.1.100' ```
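Note that the `*` wildcard behaves like a shell glob rather than a full regex. A minimal sketch of the matching, using a temporary directory in place of `/tmp`:

```shell
# Sketch: which files a pattern like /tmp/*.log would pick up.
# A temporary directory stands in for /tmp to keep this self-contained.
dir=$(mktemp -d)
touch "$dir/example2.log" "$dir/other.log" "$dir/notes.txt"
for f in "$dir"/*.log; do
  basename "$f"   # example2.log and other.log; notes.txt is not matched
done
rm -rf "$dir"
```

Only the `.log` files are listed, which is why `/tmp/example2.log` started being monitored while unrelated files were not.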
Syslog File * wildcard for directories :green_circle:
ossec.conf ``` syslog /tmp/testing-*/*.log ```
If we create some directories and files that match the specified pattern, for example `/tmp/testing-1/file1.log` and `/tmp/testing-2/file1.log`, and wait `logcollector.vcheck_files` seconds (64 by default), the files will be monitored. If we then use the testing log, the expected alerts are generated: ``` ** Alert 1681998608.1951086: - local,syslog,sshd, 2023 Apr 20 08:50:08 MyHost->/tmp/testing-2/file1.log Rule: 100010 (level 10) -> 'User logged' Src IP: 192.168.1.100 User: admin Dec 25 20:45:02 MyHost example[12345]: User 'admin' logged from '192.168.1.100' ** Alert 1681998624.1951346: - local,syslog,sshd, 2023 Apr 20 08:50:24 MyHost->/tmp/testing-1/file1.log Rule: 100010 (level 10) -> 'User logged' Src IP: 192.168.1.100 User: admin Dec 25 20:45:02 MyHost example[12345]: User 'admin' logged from '192.168.1.100' ```
Command monitoring with arguments :green_circle:
ossec.conf ``` command 30 ls -la / ```
After enabling the `logall` option, we can check in the `/var/ossec/logs/archives/archives.log` that the command is correctly monitored: ``` 2023 Apr 20 09:05:36 vrebollo->ls -la / ossec: output: 'ls -la /': total 7816 2023 Apr 20 09:05:36 vrebollo->ls -la / ossec: output: 'ls -la /': drwxr-xr-x 21 root system 4096 Apr 20 08:32 . 2023 Apr 20 09:05:36 vrebollo->ls -la / ossec: output: 'ls -la /': drwxr-xr-x 21 root system 4096 Apr 20 08:32 .. 2023 Apr 20 09:05:36 vrebollo->ls -la / ossec: output: 'ls -la /': -rw------- 1 root system 8696 Apr 20 08:32 .bash_history 2023 Apr 20 09:05:36 vrebollo->ls -la / ossec: output: 'ls -la /': -rw------- 1 root system 14 Jan 27 05:18 .python_history 2023 Apr 20 09:05:36 vrebollo->ls -la / ossec: output: 'ls -la /': -rw------- 1 root system 1024 Apr 18 04:38 .rnd 2023 Apr 20 09:05:36 vrebollo->ls -la / ossec: output: 'ls -la /': -rw------- 1 root system 708 Apr 20 08:12 .sh_history 2023 Apr 20 09:05:36 vrebollo->ls -la / ossec: output: 'ls -la /': drwx------ 2 root system 256 Jan 23 06:24 .ssh 2023 Apr 20 09:05:36 vrebollo->ls -la / ossec: output: 'ls -la /': -rw------- 1 root system 362 Apr 20 09:02 .vi_history 2023 Apr 20 09:05:36 vrebollo->ls -la / ossec: output: 'ls -la /': -rw-r--r-- 1 root system 263 Jan 27 05:38 .wget-hsts ```
Command monitoring without arguments :green_circle:
ossec.conf ``` command 30 df ```
After enabling the `logall` option, we can check in the `/var/ossec/logs/archives/archives.log` that the command is correctly monitored: ``` 2023 Apr 20 09:08:06 vrebollo->df ossec: output: 'df ': /dev/hd4 1441792 45656 97% 18530 75% / 2023 Apr 20 09:08:06 vrebollo->df ossec: output: 'df ': Filesystem 512-blocks Free %Used Iused %Iused Mounted on 2023 Apr 20 09:08:06 vrebollo->df ossec: output: 'df ': /dev/hd2 6553600 2279544 66% 40793 14% /usr 2023 Apr 20 09:08:06 vrebollo->df ossec: output: 'df ': /dev/hd9var 2490368 407304 84% 7980 10% /var 2023 Apr 20 09:08:06 vrebollo->df ossec: output: 'df ': /dev/hd3 1048576 799288 24% 149 1% /tmp .... ```
JSON basic file :green_circle:
ossec.conf ``` json /tmp/testing.json ```
The file is correctly monitored: ``` 2023/04/20 09:09:48 wazuh-logcollector: INFO: (1950): Analyzing file: '/tmp/testing.json'. ``` The expected alert was correctly generated: ``` ** Alert 1681999888.1964189: - local,syslog,sshd, 2023 Apr 20 09:11:28 vrebollo->/tmp/testing.json Rule: 100510 (level 10) -> 'JSON TESTING' {"testing":"example"} testing: example ```
JSON file using the label option :green_circle:
ossec.conf ``` json /tmp/testing.json ```
The file is correctly monitored: ``` 2023/04/20 09:12:53 wazuh-logcollector: INFO: (1950): Analyzing file: '/tmp/testing.json'. ``` The expected alert was correctly generated: ``` ** Alert 1681999988.1967365: - local,syslog,sshd, 2023 Apr 20 09:13:08 vrebollo->/tmp/testing.json Rule: 100510 (level 10) -> 'JSON TESTING' {"testing":"example","testing-label":"label-value"} testing: example testing-label: label-value ``` Also, the testing-label value was included correctly.
Multiline format :green_circle:
ossec.conf ``` /tmp/example-multiline.log multi-line: 5 ```
If we insert the following events into the `example-multiline.log` file: ``` Aug 9 14:22:47 hostname log line one Aug 9 14:22:47 hostname log line two Aug 9 14:22:47 hostname log line four Aug 9 14:22:47 hostname log line three Aug 9 14:22:47 hostname log line five ``` we can check in the archives.log that all the lines have been gathered into a single event: ``` 2023 Apr 20 09:15:26 vrebollo->/tmp/example-multiline.log Aug 9 14:22:47 hostname log line one Aug 9 14:22:47 hostname log line two Aug 9 14:22:47 hostname log line four Aug 9 14:22:47 hostname log line three Aug 9 14:22:47 hostname log line five ```
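Conceptually, `multi-line: 5` reads five consecutive lines and forwards them as a single event. The bundling can be illustrated with `paste` (a sketch of the resulting event shape, not of logcollector's internals):

```shell
# Join 5 consecutive log lines into one event, the way multi-line: 5
# bundles them before handing the record to the analysis engine.
printf '%s\n' 'log line one' 'log line two' 'log line three' \
              'log line four' 'log line five' \
  | paste -s -d ' ' -
```

The five input lines come out as one space-joined record, matching the single archives.log entry observed above.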
VCheck files :green_circle:
ossec.conf ``` syslog /tmp/testing-non-existing.json ```
local_internal_options ``` logcollector.vcheck_files=2 logcollector.open_attempts=298 logcollector.debug=2 ```
We can check in the ossec.log that logcollector tries to open the nonexistent file `/tmp/testing-non-existing.json` every 2 seconds: ``` 2023/04/20 09:22:31 wazuh-logcollector[11141346] logcollector.c:1097 at handle_file(): DEBUG: (1962): Unable to open file '/tmp/testing-non-existing.json'. Remaining attempts: 292 2023/04/20 09:22:33 wazuh-logcollector[11141346] logcollector.c:485 at LogCollectorStart(): DEBUG: Performing file check. 2023/04/20 09:22:33 wazuh-logcollector[11141346] logcollector.c:1097 at handle_file(): DEBUG: (1962): Unable to open file '/tmp/testing-non-existing.json'. Remaining attempts: 291 2023/04/20 09:22:35 wazuh-logcollector[11141346] logcollector.c:485 at LogCollectorStart(): DEBUG: Performing file check. 2023/04/20 09:22:35 wazuh-logcollector[11141346] logcollector.c:1097 at handle_file(): DEBUG: (1962): Unable to open file '/tmp/testing-non-existing.json'. Remaining attempts: 290 ``` Once the file is created, it is monitored correctly: ``` ** Alert 1682000649.1978139: - local,syslog,sshd, 2023 Apr 20 09:24:09 vrebollo->/tmp/testing-non-existing.json Rule: 100510 (level 10) -> 'JSON TESTING' {"testing": "example"} testing: example ```
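With these values, logcollector performs one open attempt per `vcheck_files` interval, so the retry window before the file is given up is roughly `vcheck_files × open_attempts` seconds (an estimate inferred from the decreasing `Remaining attempts` counter above):

```shell
# Rough upper bound on how long logcollector keeps retrying a missing
# file with the local_internal_options above: one open attempt every
# vcheck_files seconds, open_attempts times in total.
vcheck_files=2
open_attempts=298
echo "retry window: $((vcheck_files * open_attempts)) seconds"
```

With the tested settings this comes to about 596 seconds of retries before the file would stop being checked.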

CSyslog

Basic alert forward :green_circle:
ossec.conf ``` 127.0.0.1 7 1099 syslog /tmp/testing.log ```
We are going to use the QA `SyslogServer` tool to gather forwarded events: - First, we initiate the Syslog server simulator ``` bash-5.1# /opt/freeware/bin/python3 Python 3.7.12 (default, Dec 15 2021, 03:25:47) [GCC 8.3.0] on aix6 Type "help", "copyright", "credits" or "license" for more information. >>> from syslog_server import SyslogServer >>> syslog_server = SyslogServer(protocol='udp', port=1099) >>> syslog_server.start() 2023-04-20 09:44:34,144 SyslogServer INFO Starting syslog server ``` - Then, we append the testing event to the monitored file a few times to generate multiple alerts: `echo "Dec 25 20:45:02 MyHost example[12345]: User 'admin' logged from '192.168.1.100'" >> /tmp/testing-non-existing.json ` - Finally, we check the total messages received by the syslog server simulator and ensure that it matches with the number of events, in our case 8: ``` >>> syslog_server.get_total_messages() 8 >>> syslog_server.shutdown() 2023-04-20 09:44:55,218 SyslogServer INFO Shutting down server >>> ```

Fluentd - Logcollector Events :green_circle:

Prerequisites
Fluentd Installation (Credits to @Leoquicenoz ) **Install Fluentd in a Ubuntu host**: ``` curl -fsSL https://toolbelt.treasuredata.com/sh/install-ubuntu-jammy-td-agent4.sh | sh ``` **Fluentd Configuration** ``` #### ## Output descriptions: ## # Treasure Data (http://www.treasure-data.com/) provides cloud based data # analytics platform, which easily stores and processes data from td-agent. # FREE plan is also provided. # @see http://docs.fluentd.org/articles/http-to-td # # This section matches events whose tag is td.DATABASE.TABLE @type tdlog @id output_td apikey YOUR_API_KEY auto_create_table @type file path /var/log/td-agent/buffer/td @type file path /var/log/td-agent/failed_records ## match tag=debug.** and dump to console @type stdout @id output_stdout #### ## Source descriptions: ## ## built-in TCP input ## @see http://docs.fluentd.org/articles/in_forward @type forward @id input_forward bind 172.31.25.129 port 1515 ## built-in UNIX socket input # # type unix # # HTTP input # POST http://localhost:8888/?json= # POST http://localhost:8888/td.myapp.login?json={"user"%3A"me"} # @see http://docs.fluentd.org/articles/in_http ## live debugging agent @type debug_agent @id input_debug_agent bind 127.0.0.1 port 24230 #### ## Examples: ## ## File input ## read apache logs continuously and tags td.apache.access # # @type tail # @id input_tail # # @type apache2 # # path /var/log/httpd-access.log # tag td.apache.access # ## File output ## match tag=local.** and write to file # # @type file # @id output_file # path /var/log/td-agent/access # ## Forwarding ## match tag=system.** and forward to another td-agent server # # @type forward # @id output_system_forward # # # host 192.168.0.11 # # # secondary host is optional # # # host 192.168.0.12 # # # ## Multiple output ## match tag=td.*.* and output to Treasure Data AND file # # @type copy # @id output_copy # # @type tdlog # apikey API_KEY # auto_create_table # # @type file # path /var/log/td-agent/buffer/td # # # # @type file # 
path /var/log/td-agent/td-%Y-%m-%d/%H.log # # ```
AIX manager configuration

```xml
<fluent-forward>
  <enabled>yes</enabled>
  <socket_path>/var/run/fluent.sock</socket_path>
  <address>3.80.51.223</address>
  <port>1515</port>
  <tag>debug.test</tag>
</fluent-forward>

<socket>
  <name>fluent_socket</name>
  <location>/var/run/fluent.sock</location>
  <mode>udp</mode>
</socket>
```
Fluentd Forward - Syslog Event :green_circle:

- We generated an event in the `/tmp/testing.log` file:

```
echo "Dec 25 20:45:02 MyHost example[12345]: User 'admin' logged from '192.168.1.100'" >> /tmp/testing.log
```

- Check that the event is forwarded to fluentd:

```
2023-04-24 16:57:53.000000000 +0000 debug.test: {"message":"Dec 25 20:45:02 MyHost example[12345]: User 'admin' logged from '192.168.1.100'"}
2023-04-24 16:58:01.000000000 +0000 debug.test: {"message":"Dec 25 20:45:02 MyHost example[12345]: User 'admin' logged from '192.168.1.100'"}
```
Fluentd Forward - JSON Event :green_circle:

- We generated an event in the `/tmp/testing.json` file:

```
echo '{"testing": "example"}' >> /tmp/testing.json
```

- Check that the event is forwarded to fluentd:

```
2023-04-24 17:03:13.000000000 +0000 debug.test: {"message":"{\"testing\":\"example\"}"}
```

Fluentd - Alerts :red_circle:

Prerequisites
Installation

Fluentd alerts forward support is not included in the same branch. For this testing we are going to use a local installation from sources, using the `16351_alert_output_fluentd` branch. At installation time, the same errors appear as in the `16392-aix-build-fix` branch:

```
               [-G group] [-S] [-n dirc] [-o] [-s] file [dirx ...]
gmake[1]: *** [Makefile:1313: altbininstall] Error 2
gmake[1]: Leaving directory '/opt/wazuh_new/wazuh/src/external/cpython'
gmake: *** [Makefile:2134: install_python] Error 2
install: File root was not found.
gmake: *** [Makefile:31: install] Error 2
install: File root was not found.
gmake: *** [Makefile:29: install] Error 2
 - System is AIX.
 - Init script modified to start Wazuh during boot.
ln: /etc/rc.d/rc2.d/S97wazuh-local exists. Specify -f to remove.
ln: /etc/rc.d/rc3.d/S97wazuh-local exists. Specify -f to remove.
Starting Wazuh...
local
Starting Wazuh v4.3.11...
```
Configuration

The following configuration is proposed:

```
fluent_socket ...
yes
debug.test
var/run/fluent.sock
54.237.231.151
1515
fluent_socket var/run/fluent.sock udp
syslog /tmp/testing_alerts.log fluent_socket
syslog /tmp/testing_alerts2.log
```

By default, all alerts will be forwarded to the fluentd server. In addition, all events produced in the `/tmp/testing_alerts.log` file will be forwarded directly to fluentd by logcollector; in that case, no related alert is triggered for those events.
Fluentd Server

**Install Fluentd in a Ubuntu host**:

```
curl -fsSL https://toolbelt.treasuredata.com/sh/install-ubuntu-jammy-td-agent4.sh | sh
```

**Fluentd Configuration**

```xml
####
## Output descriptions:
##

# Treasure Data (http://www.treasure-data.com/) provides cloud based data
# analytics platform, which easily stores and processes data from td-agent.
# FREE plan is also provided.
# @see http://docs.fluentd.org/articles/http-to-td
#
# This section matches events whose tag is td.DATABASE.TABLE
<match td.*.*>
  @type tdlog
  @id output_td
  apikey YOUR_API_KEY
  auto_create_table
  <buffer>
    @type file
    path /var/log/td-agent/buffer/td
  </buffer>
  <secondary>
    @type file
    path /var/log/td-agent/failed_records
  </secondary>
</match>

## match tag=debug.** and dump to console
<match debug.**>
  @type stdout
  @id output_stdout
</match>

####
## Source descriptions:
##

## built-in TCP input
## @see http://docs.fluentd.org/articles/in_forward
<source>
  @type forward
  @id input_forward
  bind 172.31.25.129
  port 1515
</source>

# HTTP input
# POST http://localhost:8888/<tag>?json=<json>
# POST http://localhost:8888/td.myapp.login?json={"user"%3A"me"}
# @see http://docs.fluentd.org/articles/in_http

## live debugging agent
<source>
  @type debug_agent
  @id input_debug_agent
  bind 127.0.0.1
  port 24230
</source>
```
Basic alert forward :green_circle:

- **Produce an event in the testing alert file**: `Dec 25 20:45:02 MyHost example[12345]: User 'admin' logged from '192.168.1.100'`
- **Check that the alert was triggered**

```
{"timestamp":"2023-04-26T04:57:45.684CDT","rule":{"level":10,"description":"User logged","id":"100010","firedtimes":8375,"mail":false,"groups":["local","syslog","sshd"]},"agent":{"id":"000","name":"vrebollo"},"manager":{"name":"vrebollo"},"id":"1682503065.2194906","full_log":"Dec 25 20:45:02 MyHost example[12345]: User 'admin' logged from '192.168.1.100'","predecoder":{"program_name":"example","timestamp":"Dec 25 20:45:02","hostname":"MyHost"},"decoder":{"name":"example"},"data":{"srcip":"192.168.1.100","dstuser":"admin"},"location":"/tmp/testing_alerts2.log"}
```

- **Check that the alert was correctly forwarded to fluentd**

```
2023-04-26 09:58:57.000000000 +0000 debug.test: {"message":"{\"timestamp\":\"2023-04-26T04:58:57.697CDT\",\"rule\":{\"level\":10,\"description\":\"User logged\",\"id\":\"100010\",\"firedtimes\":8376,\"mail\":false,\"groups\":[\"local\",\"syslog\",\"sshd\"]},\"agent\":{\"id\":\"000\",\"name\":\"vrebollo\"},\"manager\":{\"name\":\"vrebollo\"},\"id\":\"1682503137.2195166\",\"full_log\":\"Dec 25 20:45:02 MyHost example[12345]: User 'admin' logged from '192.168.1.100'\",\"predecoder\":{\"program_name\":\"example\",\"timestamp\":\"Dec 25 20:45:02\",\"hostname\":\"MyHost\"},\"decoder\":{\"name\":\"example\"},\"data\":{\"srcip\":\"192.168.1.100\",\"dstuser\":\"admin\"},\"location\":\"/tmp/testing_alerts2.log\"}"}
```
Logcollector event and alert simultaneous forwarding :red_circle:

When a global fluentd alert forwarding configuration is combined with a logcollector fluentd event forwarding configuration, only the events of the logcollector-targeted file are forwarded, and no alerts are generated for them.

- **Generate an event in `/tmp/testing_alerts2.log`**
- **Generate an event in `/tmp/testing_alerts.log`**
- **Check that the `testing_alerts2` triggered alert was forwarded**:

```
2023-04-26 10:22:21.000000000 +0000 debug.test: {"message":"{\"timestamp\":\"2023-04-26T05:22:21.850CDT\",\"rule\":{\"level\":10,\"description\":\"User logged\",\"id\":\"100010\",\"firedtimes\":6482,\"mail\":false,\"groups\":[\"local\",\"syslog\",\"sshd\"]},\"agent\":{\"id\":\"000\",\"name\":\"vrebollo\"},\"manager\":{\"name\":\"vrebollo\"},\"id\":\"1682504541.4484206\",\"full_log\":\"Dec 25 20:45:02 MyHost example[12345]: User 'admin' logged from '192.168.1.100'\",\"predecoder\":{\"program_name\":\"example\",\"timestamp\":\"Dec 25 20:45:02\",\"hostname\":\"MyHost\"},\"decoder\":{\"name\":\"example\"},\"data\":{\"srcip\":\"192.168.1.100\",\"dstuser\":\"admin\"},\"location\":\"/tmp/testing_alerts2.log\"}"}
```

- **Check that the `/tmp/testing_alerts.log` event was forwarded but no alert was triggered**

```
2023-04-26 10:22:51.000000000 +0000 debug.test: {"message":"Dec 25 20:45:02 MyHost example[12345]: User 'admin' logged from '192.168.1.100'"}
```
Multiple alert forward :red_circle:

Some unexpected errors appear when multiple alerts are forwarded:

```
2023/04/26 04:59:51 wazuh-analysisd: ERROR: Cannot send message to socket 'fluent_socket'. (Abort).
```

***

For this check we are going to produce 5000 events in a timeframe of 10 seconds (500 EPS).

- **Generate events**

In order to generate the events for this check we are going to use this simple bash script:

```
for i in {1..10}; do
    for j in {1..500}; do
        echo "Dec 25 20:45:02 MyHost example[12345]: User 'admin' logged from '192.168.1.100'" >> /tmp/testing_alerts2.log
    done;
    sleep 1
done;
```

- **Check that multiple errors appear in the local logs**

```
2023/04/26 04:59:51 wazuh-analysisd: ERROR: Cannot send message to socket 'fluent_socket'. (Abort).
2023/04/26 04:59:51 wazuh-analysisd: ERROR: Cannot send message to socket 'fluent_socket'. (Abort).
2023/04/26 04:59:51 wazuh-analysisd: ERROR: Cannot send message to socket 'fluent_socket'. (Abort).
2023/04/26 04:59:51 wazuh-analysisd: ERROR: Cannot send message to socket 'fluent_socket'. (Abort).
2023/04/26 04:59:51 wazuh-analysisd: ERROR: Cannot send message to socket 'fluent_socket'. (Abort).
```

- **Check that multiple alerts were lost**

In this case, only 2368 alerts were triggered.

- **Check that not all triggered alerts were received by the fluentd server**

In this case, only 2240 were received.

Further research is required. An in-depth analysis will be performed in https://github.com/wazuh/wazuh-qa/issues/4052
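The inline loop above can be generalized into a small helper for testing other rates (a sketch; `gen_events` is our own name, and the log line is the same sample event used throughout this issue):

```shell
# gen_events FILE EPS SECONDS
# Append EPS copies of the sample event per second, for SECONDS seconds,
# to FILE. POSIX sh; mirrors the bash loop above but with tunable rate.
gen_events() {
  file=$1; eps=$2; secs=$3; i=0
  while [ "$i" -lt "$secs" ]; do
    j=0
    while [ "$j" -lt "$eps" ]; do
      echo "Dec 25 20:45:02 MyHost example[12345]: User 'admin' logged from '192.168.1.100'" >> "$file"
      j=$((j + 1))
    done
    sleep 1
    i=$((i + 1))
  done
}
```

For example, `gen_events /tmp/testing_alerts2.log 500 10` reproduces the 500 EPS / 10 s scenario described above.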
Rebits commented 1 year ago

Multiple errors were detected during installation :red_circle:

Sources

The following errors appear at manager installation time:

Packages

The following errors appear at installation time:

```
rpm_share: 0645-024 Unable to access directory /var/ossec/queue/vulnerabilities
rpm_share: 0645-007 ATTENTION: update_dir() returned an unexpected result.
rpm_share: 0645-007 ATTENTION: update_inst_root() returned an unexpected result.
```

For more information, check the Wazuh Local installation section in this comment.

Rebits commented 1 year ago

MITRE database is not working :red_circle:

As has already been reported, the MITRE database was not correctly created:

```
2023/04/19 10:55:14 wazuh-db: ERROR: Can't open SQLite database 'var/db/mitre.db': unable to open database file
2023/04/19 10:55:14 wazuh-analysisd: ERROR: Bad response from wazuh-db: Couldn't open DB mitre
2023/04/19 10:55:14 wazuh-analysisd: ERROR: Response from the Mitre database cannot be parsed.
2023/04/19 10:55:14 wazuh-analysisd: ERROR: Mitre matrix information could not be loaded.
```

This condition prevents the MITRE data from being included in the alerts. For example, in the case of a file update, the generated alert should include the following information:

```
      "mitre":{
         "id":[
            "T1565.001"
         ],
         "tactic":[
            "Impact"
         ],
         "technique":[
            "Stored Data Manipulation"
         ]
      },
```

However, in the AIX manager this information is not included, and the following warning is produced in the manager's logs:

```
2023/04/20 03:48:02 wazuh-analysisd: WARNING: Mitre Technique ID 'T1565.001' not found in database.
```
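A quick way to confirm whether the database file was created at all, before digging into wazuh-db logs, is to check it directly (a sketch; `mitre_db_ok` is our own helper name, and the default path follows the standard Wazuh install layout):

```shell
# mitre_db_ok [DBPATH]
# Succeed only if the MITRE database file exists and is non-empty.
# Default path is the Wazuh install layout (an assumption for illustration).
mitre_db_ok() {
  db=${1:-/var/ossec/var/db/mitre.db}
  [ -s "$db" ] || { echo "missing or empty: $db"; return 1; }
  echo "ok: $db"
}
```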
Rebits commented 1 year ago

Alerts new file option is always enabled :yellow_circle:

Note: This issue is present in version 4.4 for Linux managers.

The `alert_new_files` option is not working correctly. Even if we disable this option, alerts are still generated when new files are added to the system.

Steps to reproduce

Attributes:

```json
{
   "data":{
      "affected_items":[
         {
            "syscheck":{
               "disabled":"no",
               "frequency":10,
               "skip_nfs":"yes",
               "skip_dev":"yes",
               "skip_sys":"yes",
               "skip_proc":"yes",
               "scan_on_start":"yes",
               "file_limit":{
                  "enabled":"yes",
                  "entries":100000
               },
               "diff":{
                  "disk_quota":{
                     "enabled":"yes",
                     "limit":1048576
                  },
                  "file_size":{
                     "enabled":"yes",
                     "limit":51200
                  }
               },
               "directories":[
                  {
                     "opts":[
                        "check_md5sum",
                        "check_sha1sum",
                        "check_perm",
                        "check_size",
                        "check_owner",
                        "check_group",
                        "check_mtime",
                        "check_inode",
                        "check_sha256sum"
                     ],
                     "dir":"/tmp/testing",
                     "recursion_level":256,
                     "diff_size_limit":51200
                  }
               ],
               "whodata":{
                  "restart_audit":"yes",
                  "startup_healthcheck":"yes"
               },
               "allow_remote_prefilter_cmd":"no",
               "synchronization":{
                  "enabled":"yes",
                  "max_interval":3600,
                  "interval":300,
                  "response_timeout":30,
                  "queue_size":16384,
                  "max_eps":10
               },
               "max_eps":100,
               "process_priority":10,
               "database":"disk"
            }
         }
      ],
      "total_affected_items":1,
      "total_failed_items":0,
      "failed_items":[

      ]
   },
   "message":"Active configuration was successfully read",
   "error":0
}
```

We can see that no `alert_new_files` configuration is returned.
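For reference, the syscheck block used to disable the option can be sketched as follows (a minimal sketch; the frequency and directory mirror the API output above):

```xml
<syscheck>
  <disabled>no</disabled>
  <frequency>10</frequency>
  <!-- Disabled, yet alerts for new files are still generated -->
  <alert_new_files>no</alert_new_files>
  <directories check_all="yes">/tmp/testing</directories>
</syscheck>
```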

Rebits commented 1 year ago

Auto Ignore option is not included in the documentation :yellow_circle:

Syscheck documentation does not include any information about the `auto_ignore` option.

Rebits commented 1 year ago

Scan on start is always enabled :yellow_circle:

Note: This issue is present in version 4.4 for Linux managers.

The `scan_on_start` option is not working correctly. A syscheck scan is always initiated at startup, even if we disable this option.

Steps to reproduce


- Check that scan on start is set to `no` in the configuration through the API

```json
{"data": {"affected_items": [{"syscheck": {"disabled": "no", "frequency": 10, "skip_nfs": "yes", "skip_dev": "yes", "skip_sys": "yes", "skip_proc": "yes", "scan_on_start": "no", "file_limit": {"enabled": "yes", "entries": 100000}, "diff": {"disk_quota": {"enabled": "yes", "limit": 1048576}, "file_size": {"enabled": "yes", "limit": 51200}}, "directories": [{"opts": ["check_md5sum", "check_sha1sum", "check_perm", "check_size", "check_owner", "check_group", "check_mtime", "check_inode", "check_sha256sum"], "dir": "/tmp/testing", "recursion_level": 256, "diff_size_limit": 51200}], "whodata": {"restart_audit": "yes", "startup_healthcheck": "yes"}, "allow_remote_prefilter_cmd": "no", "synchronization": {"enabled": "yes", "max_interval": 3600, "interval": 300, "response_timeout": 30, "queue_size": 16384, "max_eps": 10}, "max_eps": 100, "process_priority": 10, "database": "disk"}}], "total_affected_items": 1, "total_failed_items": 0, "failed_items": []}, "message": "Active configuration was successfully read", "error": 0}
```
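The syscheck block used for this check can be sketched as follows (a minimal sketch; values follow the API output above):

```xml
<syscheck>
  <disabled>no</disabled>
  <frequency>10</frequency>
  <!-- Disabled, yet a scan still runs at startup -->
  <scan_on_start>no</scan_on_start>
  <directories check_all="yes">/tmp/testing</directories>
</syscheck>
```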

Rebits commented 1 year ago

Scan time option is not working as expected :yellow_circle:

Note: This issue is present in version 4.4 for Linux managers.

If we use the `scan_time` option without `scan_day`, the scan frequency is set to a week instead of a day.

Steps to reproduce

If only the `scan_time` option is enabled, we expect the scan to start every day at the specified hour, instead of running only on the weekday on which the manager was started.
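A minimal configuration exercising only `scan_time` can be sketched as (the time and directory are illustrative):

```xml
<syscheck>
  <disabled>no</disabled>
  <!-- scan_time set, scan_day deliberately omitted:
       the scan runs weekly instead of daily -->
  <scan_time>21:00</scan_time>
  <directories check_all="yes">/tmp/testing</directories>
</syscheck>
```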

Rebits commented 1 year ago

If the manager is restarted or stopped before the syscheck scheduled scan starts, all the expected alerts during the interval will be lost :yellow_circle:

ossec.conf ``` no 20 yes yes /tmp/syscheck-testing ```

When a scheduled scan is used, if a monitored file is changed before the next scan and the manager is restarted or stopped, the expected `Integrity checksum changed` alert is not generated in the next syscheck scan.

This behavior can be seen in the attached video:

syscheck_bug.webm

The process to replicate the issue would be the following:

Rebits commented 1 year ago

Scan on start option is not included in the documentation :yellow_circle:

Syscheck documentation does not include any information about the `scan_on_start` option.

Rebits commented 1 year ago

Wazuh local upgrade fails :yellow_circle:

Upgrading a Wazuh local instance fails both from sources and from packages.

Sources

Updating the manager from sources fails. The update process does not detect that the current installation mode is local and tries to install the manager instead, making the process fail:

```
Pgsql settings:
    includes:           
    libs:               
Defines:
    -DOSSECHIDS -DUSER="wazuh" -DGROUPGLOBAL="wazuh" -DAIX -DAIX -D__unix -D_LINUX_SOURCE_COMPAT -DHIGHFIRST -DENABLE_SYSC -DENABLE_CISCAT
Compiler:
    CFLAGS            -pthread -DNDEBUG -O2 -DOSSECHIDS -DUSER="wazuh" -DGROUPGLOBAL="wazuh" -DAIX -DAIX -D__unix -D_LINUX_SOURCE_COMPAT -DHIGHFIRST -DENABLE_SYSC -DENABLE_CISCAT -pipe -Wall -Wextra -std=gnu99 -I./ -I./headers/ -Iexternal/openssl/include -Iexternal/cJSON/ -Iexternal/libyaml/include -Iexternal/curl/include -Iexternal/msgpack/include -Iexternal/bzip2/ -Ishared_modules/common -Ishared_modules/dbsync/include -Ishared_modules/rsync/include -Iwazuh_modules/syscollector/include  -Idata_provider/include  -Iexternal/libpcre2/include -I/builddir/output/include 
    LDFLAGS           -pthread -L./lib '-Wl,-blibpath:/var/ossec/lib:/usr/lib:/lib' -O2 -Lshared_modules/dbsync/build/lib -Lshared_modules/rsync/build/lib  -Lwazuh_modules/syscollector/build/lib -Ldata_provider/build/lib
    LIBS              
    CC                gcc
    MAKE              gmake
gmake[1]: Leaving directory '/opt/wazuh/src'

Done building server
Stopping Wazuh...
Wait for success...
success
Removing old SCA policies...
Installing SCA policies...
Installing additional SCA policies...
grep: can't open /etc/os-release
grep: can't open /etc/redhat-release
cd external/cpython/ && export WPATH_LIB=/var/ossec/lib && export SOURCE_PATH=/opt/wazuh/src && export WAZUH_FFI_PATH=external/libffi/ && gmake install
gmake[1]: Entering directory '/opt/wazuh/src/external/cpython'
Creating directory /var/ossec/framework/python/bin
install: The -c and -m flags may not be used together.
Usage: install [-c dira] [-f dirb] [-i] [-m] [-M mode] [-O owner]
               [-G group] [-S] [-n dirc] [-o] [-s] file [dirx ...]
Creating directory /var/ossec/framework/python/lib
install: The -c and -m flags may not be used together.
Usage: install [-c dira] [-f dirb] [-i] [-m] [-M mode] [-O owner]
               [-G group] [-S] [-n dirc] [-o] [-s] file [dirx ...]
gmake[1]: *** [Makefile:1313: altbininstall] Error 2
gmake[1]: Leaving directory '/opt/wazuh/src/external/cpython'
gmake: *** [Makefile:2134: install_python] Error 2
install: File root was not found.
gmake: *** [Makefile:31: install] Error 2
/bin/sh: pgrep:  not found
install: File root was not found.
gmake: *** [Makefile:29: install] Error 2
find: bad option -maxdepth
find: bad option -maxdepth

 - System is AIX.
 - Init script modified to start Wazuh during boot.
Wait for success...
success
Searching for deprecated rules and decoders...
Starting Wazuh...
/var/ossec/bin/wazuh-control[4]: /var/ossec/bin/wazuh-apid:  not found
wazuh-apid: Configuration error. Exiting

 - Configuration finished properly.

 - To start Wazuh:
      /var/ossec/bin/wazuh-control start

 - To stop Wazuh:
      /var/ossec/bin/wazuh-control stop

 - The configuration can be viewed or modified at /var/ossec/etc/ossec.conf

   Thanks for using Wazuh.
   Please don't hesitate to contact us if you need help or find
   any bugs.

   Use our public Mailing List at:
          https://groups.google.com/forum/#!forum/wazuh

   More information can be found at:
          - http://www.wazuh.com

    ---  Press ENTER to finish (maybe more information below). ---

 - Upgrade completed.

bash-5.1# 
```

Package

```
bash-5.1# rpm -ivh ./wazuh-local-4.3.11-1.aix7.1.ppc.rpm 
error: Failed dependencies:
        wazuh-local conflicts with wazuh-local-4.3.11-1.ppc
        wazuh-local conflicts with (installed) wazuh-local-4.3.11-1.ppc
```
Rebits commented 1 year ago

Fluentd `Cannot send message to socket 'fluent_socket'` error in high load environments :red_circle:

It has been detected that, under a high load of alerts, the following error is generated, causing some alerts to be lost:

```
2023/04/26 04:59:51 wazuh-analysisd: ERROR: Cannot send message to socket 'fluent_socket'. (Abort).
```
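A quick way to quantify how many sends failed is to tally the error lines in the manager log (a sketch; `count_fluent_errors` is our own helper name, and the default log path is the standard Wazuh layout):

```shell
# count_fluent_errors [LOGFILE]
# Print "DATE COUNT" for each day on which the fluent_socket send error
# appears in the given log (defaults to the standard ossec.log path).
count_fluent_errors() {
  grep "Cannot send message to socket 'fluent_socket'" "${1:-/var/ossec/logs/ossec.log}" \
    | awk '{print $1}' | sort | uniq -c | awk '{print $2, $1}'
}
```

Comparing this count against the number of generated events gives a rough measure of the loss rate.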
Rebits commented 1 year ago

Forwarding logcollector block messages through fluentd prevents rules from triggering :red_circle:

If a logcollector block is configured to forward its messages through fluentd, no alerts will be generated for the events of that file.

```
2023-04-26 10:22:21.000000000 +0000 debug.test: {"message":"{\"timestamp\":\"2023-04-26T05:22:21.850CDT\",\"rule\":{\"level\":10,\"description\":\"User logged\",\"id\":\"100010\",\"firedtimes\":6482,\"mail\":false,\"groups\":[\"local\",\"syslog\",\"sshd\"]},\"agent\":{\"id\":\"000\",\"name\":\"vrebollo\"},\"manager\":{\"name\":\"vrebollo\"},\"id\":\"1682504541.4484206\",\"full_log\":\"Dec 25 20:45:02 MyHost example[12345]: User 'admin' logged from '192.168.1.100'\",\"predecoder\":{\"program_name\":\"example\",\"timestamp\":\"Dec 25 20:45:02\",\"hostname\":\"MyHost\"},\"decoder\":{\"name\":\"example\"},\"data\":{\"srcip\":\"192.168.1.100\",\"dstuser\":\"admin\"},\"location\":\"/tmp/testing_alerts2.log\"}"}
2023-04-26 10:22:51.000000000 +0000 debug.test: {"message":"Dec 25 20:45:02 MyHost example[12345]: User 'admin' logged from '192.168.1.100'"}
```

Note: The configuration used is the one described in https://github.com/wazuh/wazuh-qa/issues/4053#issuecomment-1513452046, Fluentd - Alerts section.
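The kind of logcollector block involved can be sketched as follows (a minimal sketch; the file path and socket name follow the referenced configuration):

```xml
<!-- Events from this file are sent to the fluent_socket target;
     as reported above, no alerts are then generated for them -->
<localfile>
  <log_format>syslog</log_format>
  <location>/tmp/testing_alerts.log</location>
  <target>fluent_socket</target>
</localfile>
```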

Rebits commented 1 year ago

Testing results :green_circle:

Package Installation :green_circle:

```
root@AIX:/$ yum install /tmp/wazuh-local-4.3.11-1.custom.aix7.1.power9.ppc.rpm
AIX generic repository                1.5 kB/s | 2.7 kB  00:01
AIX generic repository                 19 MB/s |  18 MB  00:00
AIX noarch repository                  54 kB/s | 2.6 kB  00:00
AIX noarch repository                 2.2 MB/s | 2.1 MB  00:00
AIX 7.1 specific repository            50 kB/s | 2.7 kB  00:00
AIX 7.1 specific repository           3.1 MB/s | 1.0 MB  00:00
AIX 7.2 specific repository            24 kB/s | 2.7 kB  00:00
AIX 7.2 specific repository           7.1 MB/s | 1.2 MB  00:00
Dependencies resolved.
=====================================================================================================================
 Package                Architecture         Version                       Repository                           Size
=====================================================================================================================
Installing:
 wazuh-local            ppc                  4.3.11-1.custom               @commandline                         19 M

Transaction Summary
=====================================================================================================================
Install  1 Package

Total size: 19 M
Installed size: 58 M
Is this ok [y/N]: y
Downloading Packages:
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                     1/1
  Running scriptlet: wazuh-local-4.3.11-1.custom.ppc                     1/1
  Installing       : wazuh-local-4.3.11-1.custom.ppc                     1/1
  Running scriptlet: wazuh-local-4.3.11-1.custom.ppc                     1/1
  Verifying        : wazuh-local-4.3.11-1.custom.ppc                     1/1

Installed:
  wazuh-local-4.3.11-1.custom.ppc

Complete!
root@AIX:/$
```
MITRE DB warnings no longer appear :green_circle:

MITRE errors and warnings no longer appear in the wazuh-local logs:

```
2023/05/16 11:05:58 wazuh-csyslogd: INFO: Remote syslog server not configured. Clean exit.
2023/05/16 11:05:58 wazuh-dbd: INFO: Database not configured. Clean exit.
2023/05/16 11:05:58 wazuh-integratord: INFO: Remote integrations not configured. Clean exit.
2023/05/16 11:05:58 wazuh-agentlessd: INFO: Not configured. Exiting.
2023/05/16 11:05:58 wazuh-db: INFO: Started (pid: 19988726).
2023/05/16 11:05:58 wazuh-execd: INFO: Started (pid: 21037104).
2023/05/16 11:05:58 wazuh-maild: INFO: E-Mail notification disabled. Clean Exit.
2023/05/16 11:05:58 wazuh-syscheckd: INFO: Started (pid: 19136748).
2023/05/16 11:05:58 wazuh-syscheckd: INFO: (6003): Monitoring path: '/bin', with options 'size | permissions | owner | group | mtime | inode | hash_md5 | hash_sha1 | hash_sha256 | scheduled'.
2023/05/16 11:05:58 wazuh-syscheckd: INFO: (6003): Monitoring path: '/boot', with options 'size | permissions | owner | group | mtime | inode | hash_md5 | hash_sha1 | hash_sha256 | scheduled'.
2023/05/16 11:05:58 wazuh-syscheckd: INFO: (6003): Monitoring path: '/etc', with options 'size | permissions | owner | group | mtime | inode | hash_md5 | hash_sha1 | hash_sha256 | scheduled'.
2023/05/16 11:05:58 wazuh-syscheckd: INFO: (6003): Monitoring path: '/sbin', with options 'size | permissions | owner | group | mtime | inode | hash_md5 | hash_sha1 | hash_sha256 | scheduled'.
2023/05/16 11:05:58 wazuh-syscheckd: INFO: (6003): Monitoring path: '/usr/bin', with options 'size | permissions | owner | group | mtime | inode | hash_md5 | hash_sha1 | hash_sha256 | scheduled'.
2023/05/16 11:05:58 wazuh-syscheckd: INFO: (6003): Monitoring path: '/usr/sbin', with options 'size | permissions | owner | group | mtime | inode | hash_md5 | hash_sha1 | hash_sha256 | scheduled'.
2023/05/16 11:05:58 wazuh-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/mtab'
2023/05/16 11:05:58 wazuh-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/hosts.deny'
2023/05/16 11:05:58 wazuh-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/mail/statistics'
2023/05/16 11:05:58 wazuh-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/random-seed'
2023/05/16 11:05:58 wazuh-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/random.seed'
2023/05/16 11:05:58 wazuh-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/adjtime'
2023/05/16 11:05:58 wazuh-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/httpd/logs'
2023/05/16 11:05:58 wazuh-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/utmpx'
2023/05/16 11:05:58 wazuh-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/wtmpx'
2023/05/16 11:05:58 wazuh-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/cups/certs'
2023/05/16 11:05:58 wazuh-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/dumpdates'
2023/05/16 11:05:58 wazuh-syscheckd: INFO: (6206): Ignore 'file' entry '/etc/svc/volatile'
2023/05/16 11:05:58 wazuh-syscheckd: INFO: (6207): Ignore 'file' sregex '.log$|.swp$'
2023/05/16 11:05:58 wazuh-syscheckd: INFO: (6004): No diff for file: '/etc/ssl/private.key'
2023/05/16 11:05:58 wazuh-logcollector: INFO: Monitoring output of command(360): df -P
2023/05/16 11:05:58 wazuh-logcollector: INFO: Monitoring full output of command(360): netstat -tulpn | sed 's/\([[:alnum:]]\+\)\ \+[[:digit:]]\+\ \+[[:digit:]]\+\ \+\(.*\):\([[:digit:]]*\)\ \+\([0-9\.\:\*]\+\).\+\ \([[:digit:]]*\/[[:alnum:]\-]*\).*/\1 \2 == \3 == \4 \5/' | sort -k 4 -g | sed 's/ == \(.*\) ==/:\1/' | sed 1,2d
2023/05/16 11:05:58 wazuh-logcollector: INFO: Monitoring full output of command(360): last -n 20
2023/05/16 11:05:58 wazuh-logcollector: INFO: (1950): Analyzing file: '/var/ossec/logs/active-responses.log'.
2023/05/16 11:05:58 wazuh-logcollector: INFO: Started (pid: 5046440).
2023/05/16 11:05:58 rootcheck: INFO: Starting rootcheck scan.
2023/05/16 11:05:58 wazuh-monitord: INFO: Started (pid: 16908396).
2023/05/16 11:05:58 wazuh-syscheckd: INFO: (6000): Starting daemon...
2023/05/16 11:05:58 wazuh-syscheckd: INFO: (6010): File integrity monitoring scan frequency: 43200 seconds
2023/05/16 11:05:58 wazuh-syscheckd: INFO: (6008): File integrity monitoring scan started.
2023/05/16 11:05:58 wazuh-modulesd: INFO: Started (pid: 19595444).
2023/05/16 11:05:58 wazuh-modulesd:agent-upgrade: INFO: (8153): Module Agent Upgrade started.
2023/05/16 11:05:58 wazuh-modulesd:ciscat: INFO: Module disabled. Exiting...
2023/05/16 11:05:58 wazuh-modulesd:osquery: INFO: Module disabled. Exiting...
2023/05/16 11:05:58 wazuh-modulesd:database: INFO: Module started.
2023/05/16 11:05:58 wazuh-modulesd:download: INFO: Module started.
2023/05/16 11:05:58 wazuh-modulesd:task-manager: INFO: (8200): Module Task Manager started.
2023/05/16 11:05:59 wazuh-analysisd: INFO: Total rules enabled: '6327'
2023/05/16 11:05:59 wazuh-analysisd: INFO: Started (pid: 18153562).
2023/05/16 11:06:00 wazuh-analysisd: INFO: (7200): Logtest started
```

The performance/footprint test has been launched for this package. No MITRE error or warning was generated during the test.