Yelp / elastalert

Easy & Flexible Alerting With ElasticSearch
https://elastalert.readthedocs.org
Apache License 2.0

Cardinality Rule uses 91% CPU #1568

Open sathishdsgithub opened 6 years ago

sathishdsgithub commented 6 years ago

@Qmando @Dmitry1987 The rule below uses 91% of the CPU all the time. Can someone tell me how to fix this? Also, is there any option to run ElastAlert multithreaded? I have a 16-core CPU, but ElastAlert only uses one core :-( . Can ElastAlert be run with parallel processing?

# ElastAlert Rule Name
name: Rule Name

# ElasticSearch Index Name
index: firewalls_*

# Timestamp added to overcome Graylog parsing issue
timestamp_field: timestamp
timestamp_type: custom
timestamp_format: '%Y-%m-%d %H:%M:%S.%f'
timestamp_format_expr: 'ts[:23] + ts[26:]'

doc_type: message

# Internal Outbreak Aggregation
type: cardinality

query_key: [srx-source-address, srx-destination-port]
cardinality_field: srx-destination-address
max_cardinality: 75

timeframe:
#  hours: 1
#  seconds: 60
  minutes: 10

filter:
- query_string:
    query: "srx-rt-flow: RT_FLOW_SESSION_DENY"

import:
  - filter-whitelist-ip
  - filter-whitelist-zone
  - va-scanner-ip
  - scanners-ip
  - filter-whitelist-source-port

aggregation:
  minutes: 20

aggregation_key: ['srx-source-address', 'srx-destination-port']

############### Email Alert Configuration SMTP #############################

alert:
- "email"
smtp_host: smtpio.mail.com
smtp_port: 25
smtp_ssl: false
from_addr: "Elastalert@mail.com"

email:
  - sample@mail.com

# Email will contain the following subject tag and body message
alert_subject: "Internal outbreak for port {1} from {0} @ {2}"
alert_subject_args:
  - srx-source-address
  - srx-destination-port
  - timestamp

html_table_title: "<h3> Internal Outbreak has been detected </h3> <p style='color:#4169E1;' style=font-family:verdana> Hi Team <br/><br/> Elastalert has identified an outbreak between the following source and destination IP addresses, do something about it</p>"

alert_text_type: alert_text_only

alert_text: |

email_type: "html"

summary_table_fields_html:
  - some fields

(screenshot attached)
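A note on the multithreading question: ElastAlert runs as a single Python process and evaluates its rules sequentially, and the cardinality rule type counts distinct cardinality_field values on the ElastAlert side rather than in Elasticsearch, so every event matching the filter is downloaded and processed in Python. As far as I know there is no built-in multithreading option; the usual workaround is to split the rule files across several independent instances, one global config per instance. A minimal sketch, using hypothetical file and folder names and assuming each instance is given its own writeback index to keep its state separate:

# config_a.yaml - global config for the first ElastAlert instance
rules_folder: rules_group_a           # first subset of the rule files
run_every:
  minutes: 5                          # how often this instance queries Elasticsearch
buffer_time:
  minutes: 15
es_host: elasticsearch.example.com    # placeholder host
es_port: 9200
writeback_index: elastalert_status_a  # assumed: one status index per instance

# A second file, config_b.yaml, would be identical except for
# rules_folder: rules_group_b and writeback_index: elastalert_status_b.
# Each instance is then started separately, e.g.
#   python -m elastalert.elastalert --verbose --config config_a.yaml

Note that this only spreads load when more than one rule is competing for the same core; a single heavy cardinality rule is still bound by the volume of events it has to pull down each run.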

Dmitry1987 commented 6 years ago

Wow, it makes ElastAlert itself work hard? :D Not just Elasticsearch during the query?

That has never happened to me... I run it in a very tiny container. It does eat up RAM, up to about 4 GB sometimes, but not CPU. Maybe it's your whitelists, some extra data that causes hard work on the ElastAlert side...

What are those?

import:
        - filter-whitelist-ip
        - filter-whitelist-zone
        - va-scanner-ip
        - scanners-ip
        - filter-whitelist-source-port
sathishdsgithub commented 6 years ago

@Dmitry1987 @Qmando

They contain several NOT filters. I created them to whitelist IP addresses and zones, something like the example below.


filter:
   - query:
        query_string:
           query: "NOT srx-source-address: (10.10.10.1 OR 10.10.10.2  OR 10.10.10.3 )"
Dmitry1987 commented 6 years ago

Well, those filters should put the load on Elasticsearch itself, not on ElastAlert... Weird. Sorry, I have no idea; maybe try the rule with and without the whitelists to verify that they are the cause. Otherwise it might just be a bad version of ElastAlert, or something wrong in the general config.
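
One concrete way to run that comparison, assuming a throwaway copy of the rule (hypothetical file name) with the import lines removed and the email alert swapped for ElastAlert's debug alerter, run on its own with the --rule command line option:

# test-no-whitelists.yaml - temporary copy for comparison only
name: Rule Name (no whitelists)   # rule names must be unique
type: cardinality
index: firewalls_*
timestamp_field: timestamp
timestamp_type: custom
timestamp_format: '%Y-%m-%d %H:%M:%S.%f'
timestamp_format_expr: 'ts[:23] + ts[26:]'
query_key: [srx-source-address, srx-destination-port]
cardinality_field: srx-destination-address
max_cardinality: 75
timeframe:
  minutes: 10
filter:
- query_string:
    query: "srx-rt-flow: RT_FLOW_SESSION_DENY"
# import: lines deliberately omitted; compare CPU against the original rule
alert:
- debug                           # log matches instead of emailing while testing

If CPU stays at the same level without the whitelists, the imports are not the problem and the volume of DENY events the rule has to process is the more likely cause.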
