[Closed] V1D1AN closed this issue 3 years ago
Hi @V1D1AN -- the dirmon plugin is recursive by default, so any new sub-directories that are created will be monitored for newly created files. For testing purposes, can you use the stdout connector rather than the es-search one? Can you also validate that the newly created files are visible and that the stoq container has access to that data?
Hi @mlaferrera
I have tested again and dirmon is recursive. But can we drop the log entries with the error? In stoq.cfg I have changed the log_level value.
Can we do the same for a specific plugin?
I have replaced the es-search plugin with filedir, because I want the compactly option for the results and stdout does not have this option.
I'm not sure I fully understand. Log levels can be defined at the command line with --log-level or in the stoq.cfg file. The log level is global, not per-plugin.
The compactly option is available in the filedir plugin, but not currently in the stdout plugin.
For my test platform, I want to run Suricata/stoQ/ELK in Docker. stoQ analyzes the files extracted by Suricata and generates a JSON output file; Logstash parses this JSON file and indexes it in Elasticsearch.
For the moment, I want to use the peinfo plugin to analyse the EXE files extracted by Suricata.
Unfortunately, stoQ generates errors when it analyzes entries that are not EXE files but directories (example: /var/log/suricata/filestore/ab/).
In my Elasticsearch, I see these errors:

```json
{
  "plugin_name": "peinfo",
  "error": "worker:failed to scan: File \"/usr/local/lib/python3.7/site-packages/pefile.py\", line 1852, in __parse__ ; pefile.PEFormatError: 'DOS Header magic not found.'",
  "payload_id": "1a7f1bcc-ea24-4522-9248-6710e9cc4f6d"
}
```
If a plugin generates an error, can stoq avoid logging it?
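For context, the pefile error above is raised whenever the payload does not start with the two-byte "MZ" DOS header magic. The check itself is trivial; the sketch below is only an illustration of that magic-byte test, not part of stoq's API:

```python
# Sketch (not stoq's API): test whether a payload even looks like a PE
# file before handing it to a PE parser such as pefile.
def looks_like_pe(data: bytes) -> bool:
    """Return True when the payload starts with the DOS header magic 'MZ'."""
    return data[:2] == b"MZ"

print(looks_like_pe(b"MZ\x90\x00\x03"))  # True: plausible PE header
print(looks_like_pe(b"#!/bin/sh\n"))     # False: not a PE
```

A payload that fails this test (for instance, a directory entry read as bytes) is exactly what triggers `PEFormatError: 'DOS Header magic not found.'`.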
My stoq.cfg:
```ini
[core]
# What syntax should logs be generated as?
# Valid options: text, json
log_syntax: json

# What is the logging level?
# Valid options: DEBUG, INFO, WARNING, ERROR, CRITICAL
log_level: CRITICAL

# What is the maximum size of a log file before being rotated?
log_maxbytes: 1500000

# How many log files should be kept after rotation?
log_backup_count: 5

# Where are the plugins located? For multiple paths, separate by comma
plugin_dir_list: /home/stoq/.stoq/plugins/

# What is the maximum recursion depth for the dispatcher?
max_recursion: 3

# What is the maximum size of the internal thread queue?
max_queue: 100

# How many consumers should be instantiated when using a provider?
# Note: Ensure this is a reasonable number for your use. Setting this
# too high may cause memory issues or unpredictable oddities
# when interacting with other plugins/services that have
# timeouts or heartbeats.
#
# provider_consumers: 2

# Which plugins should be loaded by default, if no plugin
# of its class is loaded. Multiple plugins may be defined,
# but must be comma separated.
providers: dirmon
# archivers:
connectors: filedir
# decorators:
# dispatchers:

# When dispatching, always send the payload to the listed worker
# plugins. Multiple plugins may be defined, but must be comma
# separated.
# always_dispatch:

[dirmon]
source_dir=/files

[filedir]
results_dir=/tmp
compactly=True
```
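As a side note, stoq.cfg is standard INI syntax, so the values above can be sanity-checked with Python's configparser. This is just a sketch: the inline string mirrors a few of the options shown above rather than reading the real file.

```python
import configparser

# Sketch: parse a snippet mirroring the stoq.cfg shown above
# (inline string used instead of the real file path).
cfg = configparser.ConfigParser()
cfg.read_string("""
[core]
log_syntax: json
log_level: CRITICAL
providers: dirmon
connectors: filedir

[dirmon]
source_dir: /files

[filedir]
results_dir: /tmp
compactly: True
""")

print(cfg["core"]["log_level"])                # CRITICAL
print(cfg.getboolean("filedir", "compactly"))  # True
```

Note that configparser accepts both `key: value` and `key=value`, which is why the [dirmon] and [filedir] sections above can use `=` while [core] uses `:`.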
That error is coming from the peinfo plugin because it appears you are scanning a payload that does not have a valid header with a plugin that expects one. This is completely expected and is as designed. I would suggest leveraging dispatching [1] to automatically route payloads based on the dispatcher results, to avoid errors such as this in your results.
[1] https://stoq-framework.readthedocs.io/en/latest/dev/dispatchers.html
If I add this to my stoq.cfg:

```ini
[core]
dispatcher: yara
```

must I create a plugin or a Python script to detect PE files?
If you are using the yara dispatcher plugin, you can leverage yara rules to route payloads to specific plugins. You can find some examples in the yara plugin repo [1].
[1] https://github.com/PUNCH-Cyber/stoq-plugins-public/blob/master/yara/yarascan/rules/dispatcher.yar
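For illustration, a dispatcher rule in the style of that repo might look like the sketch below. The rule name is an assumption, and the meta fields follow the convention used by stoq's yara dispatcher rules, where the plugin field names the worker that the matching payload should be routed to:

```
rule exe_file
{
    meta:
        plugin = "peinfo"
        save = "True"
    condition:
        // 0x5A4D is "MZ" read little-endian: the DOS header magic
        uint16(0) == 0x5A4D
}
```

With a rule like this, only payloads that actually start with the DOS header magic would be dispatched to peinfo, so directories and non-PE files would no longer produce the PEFormatError seen earlier.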
Thanks for your help 👍 I will test that
My pleasure! Glad I could be of help.
Hi,
I've just discovered your project and I find it very interesting, but I have a problem with dirmon.
I use the latest stoq version with Docker, and Suricata 5 with file extraction. I use version 2 of the file extraction, so all my files end up under "/var/log/suricata/filestore/ae/ae........" or "/var/log/suricata/filestore/ab/ab.........".
With Docker, I use a bind mount:

```
docker run --rm --network=XXX -u root -v /var/log/suricata/filestore:/files -v /root/rules:/rules --name stoq -ti --entrypoint /bin/bash stoq:3.0.1
```

For the moment I am just testing stoq ;) later I will use docker run with another entrypoint.
Now I want to scan every newly created file in my Suricata directory:

```
stoq run -a peinfo hash_ssdeep yara -P dirmon -C es-search --plugin-opts es-search:es_host=http://user:password@elasticsearch es-search:es_index="stoq" yara:worker_rules="/rules/toto.yar" dirmon:source_dir=/files/
```
However, Suricata creates a directory named after the first two characters of the file's sha256, and dirmon analyses this newly created directory rather than the file inside it.
Then I get this error: pefile.PEFormatError: 'DOS Header magic not found.' That is normal, since it is analyzing a directory.
Do you have an idea for resolving my problem? dirmon seems to have no recursive option.
I tested with "dirmon:source_dir=/files/*/" but it doesn't work.
Thanks for your help