crow1011 / wazuh2thehive

Wazuh integration TheHive

Change location to help parsing #10

Open · dgray161 opened this issue 1 year ago

dgray161 commented 1 year ago

If Wazuh produces a ton of logs, the script falls way behind and it takes forever to generate TheHive cases. Is it possible to change the location where the script looks for events to push to TheHive? I'd like to extract rule level 10+ alerts and send them to another file for the script to look at and generate TheHive events from. I've been unable to figure out how to change the location. I'm assuming the script looks at /var/ossec/logs/alerts/alerts.json, and I'd like to change that. Any help would be appreciated.

crow1011 commented 1 year ago

Hi @dgray161. Wazuh passes the location of the alert file as an argument. As I remember, a separate file is created for each alert, so you cannot send all alerts of a certain type to one file; the script expects to receive the path to a file containing a single alert, not a group of alerts. You can try running custom-w2thive.py yourself by passing it the arguments it needs:

  1. alert_file_location
  2. thive_api_key
  3. thive_api

The final run command might look like this: python3 custom-w2thive.py <alert_file_location> <thive_api_key> <thive_api>

This will allow you to specify the path to the file yourself, but I didn't intend it to be used that way and can't guarantee it will work. If you describe your Wazuh installation in more detail, how the script is run, the number of alerts sent to TheHive, and the problem itself, I will try to optimize the code so that delays do not occur or are at least acceptable.
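For reference, here is a minimal sketch of how the script presumably reads those three arguments, assuming it uses sys.argv in the order listed above (not verified against the current version of the repository):

import sys

# Sketch only: argument order assumed from the list above.
args = sys.argv
alert_file_location = args[1]   # path to a file containing a single alert
thive_api_key = args[2]         # TheHive API key
thive_api = args[3]             # TheHive base URL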

dgray161 commented 1 year ago

I really appreciate your quick response. I will try to send you all the details possible. We're running Wazuh in Docker containers and following the guide from Wazuh: https://wazuh.com/blog/using-wazuh-and-thehive-for-threat-protection-and-incident-response/

In our Docker configuration we're running a master node and a worker node. We're using both the custom-w2thive and custom-w2thive.py scripts on both the master and worker node containers.

We are able to get alerts into TheHive, but in some cases it takes a while. While running in debug mode, I noticed that if an event fires a ton (a ton meaning a few thousand times), the script parses through the alerts one at a time, causing it to fall way behind the current time. I've tried moving the level-threshold check up in the script, to no avail. Based on my interpretation of your script (and I'm a huge Python rookie, haha), it looks like this:

logger.debug('#start main')
logger.debug('#get alert file location')
alert_file_location = args[1]
logger.debug('#open alert file')
w_alert = json.load(open(alert_file_location))
logger.debug('#alert data')
logger.debug(str(w_alert))
logger.debug('#check rule level')
if w_alert['rule']['level'] <= low_threshold and w_alert['rule']['firedtimes'] > rule_fired:
    exit()

So, my interpretation is that it gets the alert file location (I'd like to change that location), opens the file, loads it as JSON, and then logs the data. At that point you could possibly check the level threshold, but I don't believe that would speed the script up much. You can see that I attempted to use a variable for the level threshold (e.g. 11) and times fired (e.g. > 1), but I'm not very good with Python and don't know much about it.
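As an illustration of that early-exit idea (not the author's actual code), here is a minimal sketch that performs the level check immediately after loading the alert and before any TheHive setup; low_threshold is an assumed config value mirroring the snippet above:

import json
import sys

# Assumed config value mirroring the snippet above; not part of the original script.
low_threshold = 10   # discard alerts at or below this rule level

def main(args):
    # Wazuh passes the path to a file containing a single alert as the first argument.
    alert_file_location = args[1]
    with open(alert_file_location) as f:
        w_alert = json.load(f)

    # Bail out as early as possible, before any TheHive client or config is touched,
    # so low-level alerts cost almost nothing to discard. A firedtimes check could be
    # combined here in the same way.
    if w_alert['rule']['level'] <= low_threshold:
        sys.exit(0)

    # ... the rest of the script (building and sending the TheHive alert) would go here ...

if __name__ == '__main__':
    main(sys.argv)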

I'm sure you'll know a far better way, but my idea was to tail /var/ossec/logs/alerts/alerts.json (which is where I figured alert_file_location points), send the level 10+ alerts to a separate file, and then have custom-w2thive.py parse the alerts from there. Basically, every alert that landed in that new file would generate a TheHive alert, because it would already be above the level threshold.
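A rough sketch of that tail-and-filter idea, assuming alerts.json is written as one JSON alert per line; the destination path /tmp/alerts_high.json and the level cut-off are placeholders:

import json
import time

# Illustrative paths and threshold; adjust to taste.
SOURCE = '/var/ossec/logs/alerts/alerts.json'
DEST = '/tmp/alerts_high.json'        # hypothetical filtered output file
MIN_LEVEL = 10                        # keep rule level 10 and above

def follow(path):
    """Yield new lines appended to the file, like `tail -f`."""
    with open(path) as f:
        f.seek(0, 2)                  # start at end of file
        while True:
            line = f.readline()
            if not line:
                time.sleep(0.5)
                continue
            yield line

def main():
    with open(DEST, 'a') as out:
        for line in follow(SOURCE):
            try:
                alert = json.loads(line)
            except json.JSONDecodeError:
                continue              # skip partial or malformed lines
            if alert.get('rule', {}).get('level', 0) >= MIN_LEVEL:
                out.write(line)
                out.flush()

if __name__ == '__main__':
    main()

The catch, as noted above, is that custom-w2thive.py expects the path to a file containing a single alert, so something would still have to feed it the filtered alerts one at a time.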

What you created is awesome for a small environment. I'm just trying to see if I can optimize it for a larger environment. We've been running it for a long time and really appreciate all the work you've done to create it. If I had a better understanding of python, I might be able to figure it out myself.

crow1011 commented 1 year ago

Unfortunately, I won't be able to figure it out quickly, since I haven't worked with Wazuh in a long time, but I'll try to do it over the weekend. Please correct me if I'm wrong:

  1. Would you like to set a threshold for the level of event rules you want to receive in TheHive and discard the rest? (Have you tried changing the lvl_threshold setting?)
  2. Would you like to discard events whose w_alert['rule']['firedtimes'] value is above a given threshold?

As I remember, the script is executed in this order: Wazuh calls it with the path to a file containing a single alert, the script loads that file, checks the rule level, and then builds and sends the alert to TheHive.

So far, I don't see a way to read alerts from two places, since the script gets the path to the alert file from Wazuh.

dgray161 commented 1 year ago

No worries. It is something we're working on as well, but since you're the originator, I figured you'd be a good person to reach out to. I'll reply to your questions in order below:

1 - The level threshold works just fine for sending events to TheHive. The problem is that the script looks at every single event that comes into Wazuh. If you run the debug and look at /var/ossec/logs/integration.log, you'll see each event being looked at to see if it meets lvl_threshold. If there were a way to quickly look at and discard an event that doesn't meet the level threshold, that would be nice.

2 - I tried to use rule.firedtimes and rule.level to hopefully discard the event quickly, before it moved down and pulled the hive API and key. I figured that would speed it up at least a little bit.

Wazuh runs the custom-w2thive.py script against every alert that lands in /var/ossec/logs/alerts/alerts.json, regardless of lvl_threshold.

Ideally, if the script saw an event that met lvl_threshold, it could send that alert to another file (i.e. /tmp/alerts.json), and then the script would just generate TheHive alerts based on that file. The /var/ossec/logs/alerts/alerts.json file updates in real time, and if the alerts at or above lvl_threshold were sent to an alternate destination, the script could keep up in real time. A sketch of what the consumer side of that idea might look like is below.
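Purely as an illustration of that idea, this rough sketch watches a filtered file such as /tmp/alerts.json and hands each alert back to custom-w2thive.py one at a time, since the script expects the path to a file containing a single alert; the script path and API values are placeholders:

import os
import subprocess
import tempfile
import time

# Placeholder values for illustration only.
FILTERED_FILE = '/tmp/alerts.json'                      # file containing only high-level alerts
THIVE_API_KEY = '<thive_api_key>'
THIVE_API = '<thive_api_url>'
SCRIPT = '/var/ossec/integrations/custom-w2thive.py'    # assumed install location

def follow(path):
    """Yield new lines appended to the file, like `tail -f`."""
    with open(path) as f:
        f.seek(0, 2)
        while True:
            line = f.readline()
            if not line:
                time.sleep(0.5)
                continue
            yield line

def main():
    for line in follow(FILTERED_FILE):
        # custom-w2thive.py expects a path to a file holding a single alert,
        # so write each alert to its own temporary file first.
        with tempfile.NamedTemporaryFile('w', suffix='.json', delete=False) as tmp:
            tmp.write(line)
            tmp_path = tmp.name
        subprocess.run(['python3', SCRIPT, tmp_path, THIVE_API_KEY, THIVE_API])
        os.remove(tmp_path)

if __name__ == '__main__':
    main()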

I hope that makes sense. I appreciate your quick replies regarding this. It appears to be something you did a long time ago.

dgray161 commented 1 year ago

Good afternoon,

Did you have an opportunity to take a look at this?