wazuh / wazuh-splunk

Wazuh - Splunk App
https://wazuh.com
GNU General Public License v2.0

App won't work in a Search Head cluster environment #447

Closed manuasir closed 5 years ago

manuasir commented 5 years ago

Hi, the app won't work in a Search Head cluster environment when it is installed by a deployer instance. It seems that the backend does not work properly when inserting API entries.

The current steps for approaching this issue are described in the comment below.

Regards

manuasir commented 5 years ago

Warning: Backup every file before editing

Indexers

  1. Configure each indexer instance as a cluster peer.
  2. Enable a receiving port (typically 9997) on each indexer so forwarders can send data (see the sketch below).
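
As a reference, a minimal sketch of both steps from the CLI; the master URI, replication port, secret, and credentials below are placeholders for your environment:

    # Join the indexer to the cluster as a peer (run on each indexer)
    /opt/splunk/bin/splunk edit cluster-config -mode slave -master_uri https://MASTER_IP:8089 -replication_port 9887 -secret my_secret -auth admin:changeme

    # Open the port that forwarders will send data to
    /opt/splunk/bin/splunk enable listen 9997 -auth admin:changeme

    /opt/splunk/bin/splunk restart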

Master

Dashboard: URL/en-US/manager/system/clustering
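
If you prefer the CLI to the clustering dashboard above, a rough equivalent is the following sketch; the replication/search factors and secret are assumptions, so adjust them to your cluster:

    # Enable this instance as the cluster master (run on the master only)
    /opt/splunk/bin/splunk edit cluster-config -mode master -replication_factor 2 -search_factor 2 -secret my_secret -auth admin:changeme

    /opt/splunk/bin/splunk restart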

  1. Define the wazuh index by creating the indexes.conf file at /opt/splunk/etc/master-apps/_cluster/local/ in the master instance. Notice the repFactor=auto setting.

indexes.conf

[wazuh]
repFactor=auto
coldPath = $SPLUNK_DB/wazuh/colddb
enableDataIntegrityControl = 1
enableTsidxReduction = 1
homePath = $SPLUNK_DB/wazuh/db
maxTotalDataSizeMB = 512000
thawedPath = $SPLUNK_DB/wazuh/thaweddb
timePeriodInSecBeforeTsidxReduction = 15552000
tsidxReductionCheckPeriodInSec = 

[wazuh-monitoring-3x]
coldPath = $SPLUNK_DB/wazuh-monitoring-3x/colddb
enableDataIntegrityControl = 1
enableTsidxReduction = 1
homePath = $SPLUNK_DB/wazuh-monitoring-3x/db
maxTotalDataSizeMB = 512000
thawedPath = $SPLUNK_DB/wazuh-monitoring-3x/thaweddb
timePeriodInSecBeforeTsidxReduction = 15552000
tsidxReductionCheckPeriodInSec = 

props.conf (placed in the same /opt/splunk/etc/master-apps/_cluster/local/ directory):

[wazuh]
DATETIME_CONFIG = 
INDEXED_EXTRACTIONS = json
KV_MODE = none
NO_BINARY_CHECK = true
category = Application
disabled = false
pulldown_type = true
FIELDALIAS-rule.groups = "rule.groups{}" AS "rule.groups"
FIELDALIAS-dstuser = "data.dstuser" AS srcuser
FIELDALIAS-srcip = "data.srcip" AS srcip
FIELDALIAS-data.title = "data.title" AS title
FIELDALIAS-oscap.scan.id = "data.oscap.scan.id" AS "oscap.scan.id"
FIELDALIAS-oscap.scan.content = "data.oscap.scan.content" AS "oscap.scan.content"
FIELDALIAS-oscap.scan.profile.title = "data.oscap.scan.profile.title" AS "oscap.scan.profile.title"
FIELDALIAS-oscap.scan.score = "data.oscap.scan.score" AS "oscap.scan.score"
FIELDALIAS-oscap.check.title = "data.oscap.check.title" AS "oscap.check.title"
FIELDALIAS-oscap.check.result = "data.oscap.check.result" AS "oscap.check.result"
FIELDALIAS-oscap.check.severity = "data.oscap.check.severity" AS "oscap.check.severity"
FIELDALIAS-audit.exe = "data.audit.exe" AS "audit.exe"
FIELDALIAS-audit.file.mode = "data.audit.file.mode" AS "audit.file.mode"
FIELDALIAS-audit.egid = "data.audit.egid" AS "audit.egid"
FIELDALIAS-audit.euid = "data.audit.euid" AS "audit.euid"
  2. Push it to the indexer peers:

    /opt/splunk/bin/splunk apply cluster-bundle

    Check the status:

    splunk show cluster-bundle-status
  3. Restart the indexer instances (e.g. with a rolling restart from the master, as sketched below).
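
A restart sketch, assuming default paths; a rolling restart issued from the master restarts the peers one at a time so search availability is preserved:

    # On the master: restart all indexer peers
    /opt/splunk/bin/splunk rolling-restart cluster-peers -auth admin:changeme

    # Restart the master itself if its own configuration changed
    /opt/splunk/bin/splunk restart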

Forwarders

Manual mode

  1. Edit the /opt/splunkforwarder/etc/system/local/outputs.conf
[tcpout]
defaultGroup=indexer1,indexer2

[tcpout:indexer1]
server=IP_FIRST_INDEXER:9997

[tcpout:indexer2]
server=IP_SECOND_INDEXER:9997
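
Alternatively, the same forwarding targets can be added from the forwarder's CLI instead of editing outputs.conf by hand (a sketch; IPs and credentials are placeholders):

    /opt/splunkforwarder/bin/splunk add forward-server IP_FIRST_INDEXER:9997 -auth admin:changeme
    /opt/splunkforwarder/bin/splunk add forward-server IP_SECOND_INDEXER:9997 -auth admin:changeme
    /opt/splunkforwarder/bin/splunk restart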

Auto-discover mode

Configure the master node to enable indexer discovery. In /opt/splunk/etc/system/local/server.conf on the master, add this stanza:

[indexer_discovery]
pass4SymmKey = my_secret
indexerWeightByDiskCapacity = true

Then, on each forwarder, point to the master in /opt/splunkforwarder/etc/system/local/outputs.conf:

[indexer_discovery:master1]
pass4SymmKey = my_secret
master_uri = https://10.152.31.202:8089

[tcpout:group1]
autoLBFrequency = 30
forceTimebasedAutoLB = true
indexerDiscovery = master1
useACK=true

[tcpout]
defaultGroup = group1
  2. The forwarders should also have /opt/splunkforwarder/etc/system/local/props.conf from here: https://raw.githubusercontent.com/wazuh/wazuh/3.7/extensions/splunk/props.conf
[wazuh]
DATETIME_CONFIG =
INDEXED_EXTRACTIONS = json
KV_MODE = none
NO_BINARY_CHECK = true
category = Application
disabled = false
pulldown_type = true
  3. And the /opt/splunkforwarder/etc/system/local/inputs.conf file: https://raw.githubusercontent.com/wazuh/wazuh/3.7/extensions/splunk/inputs.conf
    [monitor:///var/ossec/logs/alerts/alerts.json]
    disabled = 0
    host = MANAGER_HOSTNAME
    index = wazuh
    sourcetype = wazuh

Replace the MANAGER_HOSTNAME placeholder with the machine's hostname:

sed -i "s:MANAGER_HOSTNAME:$(hostname):g" /opt/splunkforwarder/etc/system/local/inputs.conf
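
Once these files are in place, restart the forwarder so the changes take effect, and optionally confirm that the output targets are active (a sketch, assuming the default install path and credentials):

    /opt/splunkforwarder/bin/splunk restart

    # List configured and active forwarding targets
    /opt/splunkforwarder/bin/splunk list forward-server -auth admin:changeme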

App

Search Head cluster

  1. Set up deployer

Deploy a Splunk instance outside the cluster; this will be the deployer machine, which installs apps on the Search Head cluster members. Edit its server.conf file, adding the following stanza, then restart (see the sketch below):

[shclustering]
pass4SymmKey = <yoursecuritykey>
shcluster_label = <shcluster1>
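
A restart sketch for the deployer, assuming a default install path, so that the [shclustering] settings take effect:

    /opt/splunk/bin/splunk restart
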
  2. Deploy instances for the search heads.

Caution: Always use new instances. The process of adding an instance to a search head cluster overwrites any configurations or apps currently resident on the instance.

For each instance that you want to include in the cluster, run the splunk init shcluster-config command and restart the instance:

splunk init shcluster-config -auth <username>:<password> -mgmt_uri <URI>:<management_port> -replication_port <replication_port> -replication_factor <n> -conf_deploy_fetch_url <URL>:<management_port> -secret <security_key> -shcluster_label <label>

splunk restart 

Note the following:

This command is only for cluster members. Do not run this command on the deployer. You can only execute this command on an instance that is up and running.

Example:

splunk init shcluster-config -auth admin:changed -mgmt_uri https://sh1.example.com:8089 -replication_port 34567 -replication_factor 2 -conf_deploy_fetch_url https://10.160.31.200:8089 -secret mykey -shcluster_label shcluster1

splunk restart 
  3. Bring up the cluster captain
splunk bootstrap shcluster-captain -servers_list "<URI>:<management_port>,<URI>:<management_port>,..." -auth <username>:<password>

Example:

splunk bootstrap shcluster-captain -servers_list "https://sh1.example.com:8089,https://sh2.example.com:8089,https://sh3.example.com:8089,https://sh4.example.com:8089" -auth admin:changed
  4. Integrate the SH cluster with the Indexer cluster
splunk edit cluster-config -mode searchhead -master_uri https://10.152.31.202:8089 -secret newsecret123 

splunk restart

You must run this CLI command on each member of the search head cluster.

This example specifies the indexer cluster master's URI and management port, together with the secret key (pass4SymmKey) used to authenticate against the indexer cluster.

Deployer

The app was cloned into $SPLUNK_HOME/etc/shcluster/apps/ on the deployer instance and pushed to the Search Head members with the following command:

 /opt/splunk/bin/splunk apply shcluster-bundle --answer-yes -target https://<sh-node>:8089 -auth admin:changeme
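
For reference, a minimal sketch of staging the app on the deployer before applying the bundle; the clone URL matches this repository, but the target directory name and the exact layout of the app inside the repository are assumptions, so adjust them to the release you deploy:

    # Stage the app in the deployer's configuration bundle directory
    cd /opt/splunk/etc/shcluster/apps/
    git clone https://github.com/wazuh/wazuh-splunk.git SplunkAppForWazuh   # directory name is an assumption

    # Then push the bundle to the Search Head members with the command above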